Why Kajima Chose AI Governance Training in 2026

Author: Tertiary Infotech Academy
Created On: 14-05-2026

How a leading construction firm tackled PDPA risk, copyright liability, and data leaks by upskilling staff on responsible generative AI — fully funded via WSQ and SFEC.

When AI Adoption Meets Legal Risk

Generative AI tools are now everywhere — from drafting tender documents to summarising meeting notes. But for many organisations, the excitement of productivity gains is shadowed by a pressing question: what happens when something goes wrong?

On 27 April 2026, Dr. Alfred Ang delivered a corporate run of our WSQ programme, Responsible Generative AI Basics (TGS-2025060472), for the team at Kajima — one of Japan's foremost construction and engineering groups with a significant Singapore presence.

The concerns that brought Kajima to the training room are ones we hear across industries: PDPA compliance, the risk of confidential data being ingested by public AI models, copyright exposure from AI-generated content, and the absence of a clear internal governance framework. Sound familiar?

The Governance Gap Most Companies Ignore

Many organisations have permitted AI tools without governing them. Staff experiment with ChatGPT, Copilot, or Gemini — sometimes pasting in client data, internal reports, or proprietary figures — without realising the legal and reputational exposure this creates.

Under Singapore's Personal Data Protection Act (PDPA), organisations remain accountable for how personal data is handled, even when a third-party AI platform processes it. A single careless prompt can constitute a data breach.

Copyright is equally fraught. AI-generated text, images, and code may reproduce protected material, and the liability often sits with the organisation that deployed the tool — not the AI vendor.

What the WSQ Programme Covers

Our one-day, 8-hour course is structured around three practical modules that give participants both the conceptual framework and hands-on skills to use generative AI responsibly:

  • Ethical Principles of Generative AI — risk assessment, professional scepticism, and applying ethical reasoning when evaluating AI outputs
  • Privacy Techniques — data anonymisation, de-identification, and secure storage practices that align with PDPA obligations
  • Best Practices for Responsible Development — intellectual property considerations, environmental impact, and building an internal AI governance baseline
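To make the privacy module above concrete, here is a minimal illustrative sketch (not course material, and the patterns and helper name are our own illustration): redacting obvious personal identifiers from text before it is pasted into a public AI tool, in the spirit of the PDPA-aligned anonymisation practices the course covers.

```python
import re

# Illustrative only: replace common Singapore identifiers with placeholder
# tags before a prompt leaves the organisation. Real anonymisation needs a
# reviewed policy and broader coverage than three regexes.
PATTERNS = {
    "[NRIC]": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),           # NRIC/FIN numbers
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),  # email addresses
    "[PHONE]": re.compile(r"\b[689]\d{7}\b"),                # 8-digit SG numbers
}

def redact(text: str) -> str:
    """Substitute each matched identifier with its placeholder tag."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarise: client Tan (S1234567A, tan@example.com, 91234567) disputes invoice."
print(redact(prompt))
# → Summarise: client Tan ([NRIC], [EMAIL], [PHONE]) disputes invoice.
```

A staff member who runs drafts through a redaction step like this, rather than pasting raw client records, is applying exactly the de-identification habit the module trains.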

The programme is mapped to the Skills Framework competency ICT-BAS-0055-1.1: Responsible AI and Generative AI Practices and is registered with SkillsFuture Singapore under course reference TGS-2025060472.

Who This Training Is Built For

The course is pitched at the beginner level — no coding or data science background required. It is designed for the people who are already using AI tools on the job and need a structured framework for doing so safely. Typical participants include administrative executives, marketing teams, compliance staff, junior analysts, and customer service professionals.

At Kajima, participants left with a concrete understanding of where the legal boundaries lie and how to structure internal AI usage policies — a foundation for responsible adoption rather than a blanket ban.

Funding: WSQ Subsidy + SFEC Makes This Near Zero-Cost

Cost is rarely a barrier for Singapore-registered companies. This course qualifies for two stackable funding schemes:

  1. WSQ Training Grant — eligible Singaporeans and PRs receive a 70–90% course fee subsidy, bringing the net fee down to as low as S$156 per participant (full fee: S$436 inclusive of GST).
  2. SkillsFuture Enterprise Credit (SFEC) — companies that qualify can claim up to S$10,000 to offset out-of-pocket training costs, stacked on top of the WSQ subsidy.

For most SMEs and mid-sized firms, combining WSQ and SFEC means the effective cost of training an entire department on AI governance is negligible. There is no good reason to delay.

AI Governance Is Not a Nice-to-Have

Singapore's regulatory direction is clear. IMDA's AI Governance Framework, MOM's guidance on technology-assisted hiring, and the PDPA's evolving advisory guidelines all point toward the same expectation: organisations must be able to demonstrate that they have taken reasonable steps to govern AI use.

Training is one of the most defensible evidence points. A workforce that understands PDPA obligations in the context of AI, recognises copyright risk, and knows how to anonymise data before prompting is strong evidence that the organisation can withstand scrutiny.

Run This Course for Your Organisation

We conduct Responsible Generative AI Basics as public runs and as dedicated corporate sessions — on-site at your premises, in our training centre, or via synchronous Zoom. Group bookings typically attract priority scheduling and can be customised with your internal AI use-case examples.

Ready to build your AI governance foundation? View the full course details and enrol here, or contact Dr. Alfred Ang to discuss a corporate run for your team.