The next frontier for data privacy professionals

THE recent spotlight on DeepSeek and other emerging AI models has drawn attention not just to the democratization of AI, but also to its associated governance challenges.

The lack of clear data usage policies and compliance certifications creates a governance gap, and with it a significant opportunity for data protection professionals. As experts in data privacy and compliance, they can play a crucial role in establishing governance structures that ensure AI is used responsibly and ethically.

The imperative for AI governance

As organizations embrace generative AI to enhance productivity and innovation, many fall into the trap of overestimating their readiness for sustainable and safe digital transformation. Such transformation requires not just technology integration, but also guardrails and regulatory compliance.

With a large number of evolving AI-related policies in circulation, navigating this volatile domain can be daunting for those in compliance or data privacy functions. AI governance training helps professionals working in AI development, deployment, or oversight develop a comprehensive understanding of the ethical and regulatory frameworks surrounding the adoption and use of AI technologies. It also equips data privacy professionals to assess the potential risks associated with AI systems, devise mitigation strategies, and establish internal guidelines that reflect both local and international standards, such as the ISO/IEC 42001 AI management system standard.

At the same time, personal competency should not be confused with organizational capability, a distinction that has significant implications for executing a well-governed adoption of AI. Applied to AI governance, this means that the regulatory knowledge gained by governance professionals should be coupled with defined AI use cases relevant to their organizations and the larger outcomes they wish to achieve. This ensures that AI initiatives are aligned not only with legal and ethical standards, but also with organizational strategies for digital transformation.

AI policies, governance in PH

While official announcements about AI-specific policies for 2025 in the Philippines are yet to materialize, the introduction of the Artificial Intelligence Act (HB 10944) into the Philippine House of Representatives in 2024 provides valuable insight into the regulatory trajectory. The Act proposes the formation of the Philippine Artificial Intelligence Board (PAIB) to oversee AI systems development, research, application, and compliance, alongside maintaining a centralized database of AI companies and laboratories in the Philippines. Violations of the Act would be met with penalties; the bill prohibits the use of any AI system that would cause unnecessary, unjustifiable, and indiscriminate moral or pecuniary damage to individuals. These developments underscore the need for proactive preparation among Data Protection Officers (DPOs) and compliance professionals, especially those who work in companies that develop or deploy AI in their processes.

Drawing upon existing laws such as the Data Privacy Act (DPA), Consumer Act, and Cybersecurity Law, professionals can anticipate and shape company policies that prioritize human oversight, ethical AI use grounded in transparency and accountability, and robust data governance. They should also regularly monitor updates from the Department of Trade and Industry (DTI) and other regulators such as the Bangko Sentral ng Pilipinas (BSP) for regulatory issuances that cover AI technology adoption. This forward-thinking mindset aligns with the Philippine government’s emphasis on innovation, as highlighted in initiatives like the DTI’s National AI Strategy 2.0.

AI for competency-building

Generative AI can redefine education, particularly in developing personalized learning experiences. Data professionals can benefit from such AI-enabled learning as it supports a range of teaching strategies that enhance adult learning and engagement (such as the flipped classroom approach) and facilitates deeper understanding of AI governance.

For example, interactive AI tutors can support pre-class and in-class learning and provide tailored help to students preparing for their certification exams. They offer on-demand explanations suited to learners’ contextual needs, while scenario-based learning enhances critical thinking by simulating real-world challenges. This allows participants to work hands-on with AI governance scenarios in a guided environment.

The applications of such AI-blended courses are far-reaching and can extend to the education of other stakeholders in AI governance. This can include helping management decision-makers understand the risks associated with AI, so as to secure buy-in for putting guardrails in place prior to AI integration, or training staff on how to use, create, and manage AI tools responsibly. Ultimately, all of these tie in with nurturing the right competencies for a successful digital transformation in the organization.

Generative AI in data protection

Generative AI can also be leveraged to build and operationalize data protection practices in an organization. With its current capabilities, it is possible to build a custom chatbot that lets employees upload suspicious emails, checks them for possible malicious intent, and advises on what next steps to take. It can also automate repetitive tasks like vulnerability scanning and patching, freeing up human analysts for more strategic, complex work in the overall data protection and governance program. Of course, these tools must be built in a secure and controlled environment, so that conversation histories are not leaked to unintended parties and the organization controls the data on which the bot is trained.
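To make the chatbot idea concrete, here is a minimal Python sketch of the kind of rule-based triage step such a tool might run before handing an email to a privately hosted language model. Everything here, the function name, the keyword list, and the scoring thresholds, is a hypothetical illustration rather than any real product's logic.

```python
import re

# Hypothetical pre-screening helper for an internal "suspicious email" chatbot.
# A production version would pass flagged emails to a privately hosted LLM for
# deeper analysis; this sketch shows only a simple heuristic triage step.

URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "password"}

def triage_email(subject: str, body: str) -> dict:
    """Score an email for common phishing indicators and suggest a next step."""
    text = f"{subject} {body}".lower()
    flags = []
    # Urgency and credential language are classic social-engineering cues.
    if any(word in text for word in URGENCY_WORDS):
        flags.append("urgency or credential language")
    # External links in unsolicited mail warrant a closer look.
    if re.search(r"https?://\S+", text):
        flags.append("contains external link")
    verdict = "escalate to security team" if len(flags) >= 2 else "likely low risk"
    return {"flags": flags, "advice": verdict}
```

For instance, `triage_email("Urgent: verify your account", "Click https://example.com/reset now")` trips both heuristics and advises escalation, while a routine internal note does not. Keeping this pre-filter deterministic also supports the governance point above: its behavior is auditable, and only the LLM stage needs the stricter data-handling controls.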

Shaping the future

AI governance training is not just about compliance; it is about empowering professionals to lead responsibly in an AI-driven world. The successful integration of AI technologies depends on data protection professionals’ ability to conduct due diligence, uphold ethical standards, and keep up with emerging regulations. Knowledgeable, AI-savvy governance professionals who can advocate for transparent and accountable AI development and deployment can help bridge the governance gap, paving a road where AI technologies are used in a way that benefits society while minimizing potential harm.

Alvin Toh is the co-founder and chief marketing officer at Straits Interactive, a company that delivers end-to-end governance, risk, and compliance solutions that enable globally trusted business and responsible marketing, particularly in the areas of data protection and privacy.