
Charles & Keith: A Two-Day Generative AI Problem-Solving Workshop with 20 Cross-Functional Staff
On 30–31 March 2026, Dr Alfred Ang ran a two-day WSQ-aligned workshop at Charles & Keith on Innovative Problem Solving with Generative AI. Twenty staff from Operations, Marketing, Finance, and HR brought their real frustrations — stock imbalance, slow marketing approvals, talent retention, reporting overload — and left with a concrete GenAI workflow for each.
TL;DR — On 30–31 March 2026, Dr Alfred Ang delivered a two-day WSQ-aligned workshop on Innovative Problem Solving with Generative AI to ~20 staff from Charles & Keith. Participants came from Operations, Marketing, Finance, and HR — and brought four very different but very real problems: physical stock imbalance, slow marketing-approval cycles, talent retention, and reporting overload. Over two days the group prototyped a working GenAI workflow against each one. Run this workshop in your company →
About Charles & Keith
Charles & Keith is a Singapore-headquartered global fashion brand with retail and e-commerce presence across Asia, Europe, the Middle East and the Americas. The brand has been investing visibly in digital transformation — moving from a traditional retail operations model to a data-driven one — and the GenAI workshop sat inside that broader programme. The leadership ask was specific: not a generic "what is ChatGPT" tour, but a working session that turned four cross-functional pain points into prototype GenAI workflows in two days.
Who attended
The room was deliberately heterogeneous — 20 staff drawn from four functions, with no IT or data-science gating on attendance. This is the configuration that makes GenAI training genuinely useful: the people closest to the problem are the ones designing the solution.
- Operations — store and warehouse coordinators dealing with inventory visibility.
- Marketing — campaign and content leads navigating multi-stage approval flows.
- Finance — analysts running monthly reporting and budget reconciliation.
- HR — talent partners working on retention, onboarding, and internal mobility.
The mix matters. When marketing explains the manager-approval bottleneck in front of finance, finance recognises a similar pattern in their own purchase-order flow — and the GenAI solution gets reused, not rebuilt.
The four problems we tackled
1. Stock imbalance between physical stores
A perennial retail headache: one store sells out of a SKU while a sister store sits on excess. The team had the data, but reconciling the picture across stores, warehouses, and the e-commerce buffer was a manual job done weekly. We prototyped a GenAI assistant that ingests the daily stock and sales export, identifies SKU/store pairs likely to be imbalanced within the next 7 days, and drafts a transfer recommendation in plain language for the operations lead to approve.
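The deterministic half of that assistant — turning the daily export into candidate transfers before the GenAI step drafts the plain-language recommendation — can be sketched roughly as below. This is a minimal sketch, not the workshop prototype itself: the column names (`sku`, `store`, `stock_on_hand`, `avg_daily_sales`) and the surplus/shortage thresholds are hypothetical stand-ins for whatever the real stock export contains.

```python
import pandas as pd

def flag_imbalances(df: pd.DataFrame, horizon_days: int = 7) -> list[str]:
    """Flag SKU/store pairs likely to sell out within `horizon_days`
    while a sister store holds surplus of the same SKU.

    Assumed (hypothetical) export columns: sku, store,
    stock_on_hand, avg_daily_sales.
    """
    df = df.copy()
    # Days of cover: how long current stock lasts at the recent sales rate.
    # clip() avoids division by zero for slow-moving SKUs.
    df["days_cover"] = df["stock_on_hand"] / df["avg_daily_sales"].clip(lower=0.1)

    recs = []
    for sku, grp in df.groupby("sku"):
        short = grp[grp["days_cover"] < horizon_days]
        surplus = grp[grp["days_cover"] > 2 * horizon_days]
        for _, s in short.iterrows():
            if surplus.empty:
                continue
            # Pick the sister store with the deepest cover as donor.
            donor = surplus.sort_values("days_cover", ascending=False).iloc[0]
            qty = int(horizon_days * s["avg_daily_sales"])
            recs.append(
                f"Transfer {qty} units of {sku} from {donor['store']} "
                f"to {s['store']} ({s['days_cover']:.1f} days of cover left)."
            )
    return recs
```

In the workshop prototype, output like this was handed to the language model as context, with the model drafting the approval-ready transfer note for the operations lead rather than computing the numbers itself.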
2. Slow managerial approval for marketing campaigns
A typical campaign brief went through 4–6 approval steps, with rework loops at each. We re-framed the problem: the bottleneck was not "approvers being slow" but "approvers being asked the wrong question at the wrong time". The group built a GenAI brief assistant that, given a draft, produces an approval-ready summary in the format each manager prefers — risk profile, budget impact, brand-tone check — so the approver replies in minutes rather than days.
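The core of that re-framing is the prompt assembly: ask each approver only the questions they actually decide on. A minimal sketch of the idea follows — the approver names, their preferred sections, and the word limit are all hypothetical, standing in for the preferences the group surfaced during the workshop.

```python
# Hypothetical per-approver formats surfaced during problem framing.
APPROVER_PREFS = {
    "marketing_director": ["brand-tone check", "risk profile"],
    "finance_controller": ["budget impact", "risk profile"],
}

DEFAULT_SECTIONS = ["risk profile", "budget impact", "brand-tone check"]

def build_approval_prompt(brief_text: str, approver: str) -> str:
    """Assemble a GenAI prompt that produces an approval-ready summary
    in the format a specific approver prefers."""
    sections = APPROVER_PREFS.get(approver, DEFAULT_SECTIONS)
    wanted = "\n".join(f"- {s}" for s in sections)
    return (
        "You are preparing an approval-ready summary of a campaign brief.\n"
        "Cover ONLY the points this approver actually decides on:\n"
        f"{wanted}\n"
        "Keep it under 150 words and end with a single yes/no recommendation.\n\n"
        f"--- BRIEF ---\n{brief_text}"
    )
```

The design choice that mattered was scoping the summary per approver: the finance controller never sees the brand-tone section, so the reply comes back in minutes instead of bouncing through a rework loop.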
3. Talent retention
HR brought a real, sensitive problem: identifying flight risk early and acting on it before exit. We did not build a "predict who will quit" model — that would have been wrong on both ethics and accuracy. Instead the group built a GenAI assistant that helps managers prepare for monthly one-to-ones: synthesising recent feedback, recognising patterns in language (workload, growth signals, engagement), and suggesting specific conversation prompts. The assistant supports the manager; it does not replace the conversation.
4. Reporting overload in finance
Finance was spending two days a month converting raw transactional data into a narrative report for leadership. We prototyped a GenAI workflow that takes the cleaned monthly export, drafts the variance commentary in the company's house style, and surfaces anomalies for analyst review — leaving the analyst's time for the judgement work, not the formatting.
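The anomaly-surfacing step in that workflow is deterministic and sits upstream of the drafting model. A rough sketch, under assumed column names (`account`, `amount`) and an assumed 10% materiality threshold — both hypothetical, chosen for illustration rather than taken from the actual prototype:

```python
import pandas as pd

def variance_commentary(actuals: pd.DataFrame, budget: pd.DataFrame,
                        threshold: float = 0.10) -> list[str]:
    """Draft variance lines only for accounts that moved more than
    `threshold` against budget; everything else stays un-narrated.

    Assumed (hypothetical) columns in both frames: account, amount.
    """
    merged = actuals.merge(budget, on="account", suffixes=("_actual", "_budget"))
    merged["variance_pct"] = (
        (merged["amount_actual"] - merged["amount_budget"]) / merged["amount_budget"]
    )
    lines = []
    for _, row in merged.iterrows():
        if abs(row["variance_pct"]) >= threshold:
            direction = "over" if row["variance_pct"] > 0 else "under"
            lines.append(
                f"{row['account']}: {abs(row['variance_pct']):.0%} {direction} budget "
                f"(flagged for analyst review)"
            )
    return lines
```

In the prototype, these flagged lines were passed to the model with examples of the company's house-style commentary, so the draft narrative arrived pre-formatted and the analyst's two days shrank to a review pass.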
How the two days ran
- Day 1 morning — orientation to GenAI capabilities and, importantly, limitations. We grounded everyone in the same vocabulary (prompt, context window, hallucination, retrieval, agent) so the rest of the workshop did not stall on jargon.
- Day 1 afternoon — problem framing. Four cross-functional sub-teams refined a real problem into something a GenAI workflow could plausibly help with. Most of the value of GenAI training happens here, not in the tool demos.
- Day 2 morning — building. Each team prototyped a working GenAI workflow against their problem using the tooling provided. Hands-on, not slideware.
- Day 2 afternoon — show-and-critique. Teams presented their prototype to the room; the cross-functional audience pressure-tested each one. By close of day, every team left with a working prototype and a clear list of next steps to harden it.
What worked — and what we changed mid-workshop
Three observations worth recording for any organisation about to run a similar programme:
- Real problems beat hypothetical ones — every time. The group's attention was completely different on Day 1 afternoon (when they were solving their own problem) versus Day 1 morning (when we were demoing). We compressed the demo block by 90 minutes on Day 2 in response.
- Cross-functional teams produce more durable solutions. A pure operations team would have built a stock dashboard. The mixed team built something operations could actually use, because finance and marketing pushed back on assumptions they did not realise they were making.
- The hardest skill to teach is "what GenAI should not do". Refusing to build the retention prediction model, and instead building a manager-support assistant, was the most important moment of the workshop.
The curriculum behind this session
The workshop is anchored on the WSQ-aligned course Identifying and Solving Problems at the Workplace on Tertiary Courses Singapore, customised to bring Generative AI into the problem-solving toolkit rather than treat it as a separate topic. For organisations that want to send individual staff onto a public run before booking an in-house programme, that link is the right starting point. Cross-functional teams who want to build deeper AI capability typically follow up with the AI courses, Python courses, or data science courses on the same catalogue.
Why this format works for digital-transformation programmes
Most corporate GenAI training fails for a predictable reason: it is delivered as a tool tour, divorced from the business. A staff member learns to write a prompt, returns to their desk, and discovers that the actual blocker was never prompt-writing — it was the messy data, the unclear ownership, or the missing decision-rights. The two-day cross-functional workshop format we used at Charles & Keith short-circuits that failure mode by forcing the GenAI conversation to happen against a real problem, with all the relevant functions in the room.
For organisations under active digital-transformation programmes, the model also doubles as a low-risk way to test which problems are GenAI-shaped and which are not. Two days, four prototypes, four clear yes/no signals about where to invest next.
FAQ
Can this workshop be funded under SSG?
Yes — the underlying course is WSQ-aligned and eligible for SSG funding when delivered to qualifying participants under standard Tier 2 conditions. We can scope the funding application alongside the in-house booking. See our CASL / Tier 2 application guide for the broader funding context.
Do participants need a technical background?
No. The Charles & Keith cohort spanned Operations, Marketing, Finance, and HR — none of whom were engineers. The workshop is designed for business teams. Engineers can attend, but the format assumes a non-technical baseline.
What size of cohort works best?
15–24 participants, split into 4–5 sub-teams of 4–6 people each. With fewer than 12 participants the cross-functional dynamic weakens; with more than 28 the show-and-critique block runs over time.
Can the workshop be customised to our industry?
Yes — the Charles & Keith run used four problems chosen by their leadership. Most engagements start with a 30-minute scoping call to surface 4–6 candidate problems before the workshop itself.
What follows the two-day workshop?
Two common paths. The first is a 30-day prototype-to-production sprint on one of the four problems, run by our team alongside the participants. The second is a deeper agent build using one of the open-source stacks covered in our OpenClaw vs Hermes vs Paperclip comparison, deployed via our AI agent deployment service.
What to do next
- Read the underlying course outline. WSQ Identifying and Solving Problems at the Workplace on Tertiary Courses Singapore.
- Book an in-house run. Tell us your industry, the four to six candidate problems on your transformation list, and your preferred dates. We will return an agenda and a fixed-fee proposal within two working days. Book an in-house workshop →
- Scope the wider AI programme. If you already know that GenAI is part of your roadmap, our AI solutions engagement covers strategy through production. Request an AI strategy session →
Tertiary Infotech Academy designs and delivers Generative AI workshops for Singapore companies under active digital transformation — see our AI solutions and WSQ course development services for the surrounding work.
