The future of work is your Talent vs GPU


So, I’ve been seeing a lot of posts about “AI in Africa.” How Africa has the youngest population. How we’re projected to approach 2.5 billion people by 2050. So Africa is primed for AI, right? No. We’re not ready. Yes, we have the talent. But no, we are not equipped. Here’s what most people don’t realize. If I want to execute an AI-driven strategy as a business leader today, I have two options:

  1. Pay for compute: GPUs, AI credits, cloud compute.
  2. Hire someone.

We’re at a point where the substitute for your skill is a button I can click. Want to shoot a video ad? I can either pay $500 for Veo 3 or hire a full production team. Do you see what that means?

Your skill isn’t competing with another person; it’s competing with compute. And yet, we’re training young talent for a future where compute wins by default. What are we really preparing them for? Let me break this down.

Building a large language model is like constructing a city’s road system. Think highways, dual carriageways, fast lanes. To build them, you need:

  • Caterpillars and trucks
  • Civil engineers and strategy planners
  • Workers to pour cement, dig gutters, shovel gravel

Every lane creates jobs. Every road improves logistics. Every town you develop becomes a micro-economy. AI is the same. To get fast, clean outputs from models, you need to build the roads first: GPUs, data centers, training teams, evaluation teams. When you build these AI “roads,” you create:

  • Jobs in labeling and evaluation
  • Research opportunities
  • Domain-specific architectures
  • Local understanding

That’s the infrastructure layer. But where are our data centers? Where is our funding for model experimentation? All I see are ethics panels, policy webinars, and endless meetups. Ethics, policy, ethics, policy.

Policy framework what?

You’re talking about guardrails when we haven’t even laid the road. Guardrails are great—but they don’t matter if there’s no lane to drive on.

We need to:

  • Set up data infrastructure
  • Fund researchers to build and fine-tune models
  • Encourage young people to contribute to evaluations, training, and development

That’s how you build understanding. That’s how you develop first-principle thinking in AI. We don’t have to wait for billion-dollar labs. Today, it’s cheaper than ever to train small, useful models. Six figures or less.

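To make the “cheaper than ever” point concrete, here is a deliberately tiny toy: a character-level bigram model trained in pure Python, on a hypothetical handful of Yoruba greetings I made up for illustration. Real small-model work would use an ML framework and far more data, but the workflow is the same: gather local data, fit a model, evaluate it locally.

```python
import random
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count character-bigram frequencies across the corpus."""
    counts = defaultdict(Counter)
    for text in corpus:
        marked = "^" + text + "$"          # start/end markers
        for a, b in zip(marked, marked[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, max_len=20, seed=0):
    """Sample a string from the learned bigram distribution."""
    rng = random.Random(seed)
    out, ch = [], "^"
    for _ in range(max_len):
        nxt = counts.get(ch)
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        ch = rng.choices(chars, weights=weights)[0]
        if ch == "$":
            break
        out.append(ch)
    return "".join(out)

# Hypothetical sample data (illustrative only, not a real dataset).
corpus = ["bawo ni", "e kaaro", "e kaasan", "e ku irole"]
model = train_bigram(corpus)
print(generate(model))
```

A toy like this runs on any laptop; the serious version of the same loop, fine-tuning a small open model on local-language text, is what modest research funding could unlock.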
Use cases are everywhere:

  • Local languages: Igbo, Yoruba, Hausa
  • Health: Malaria, Typhoid, Sickle Cell
  • Education: JAMB, NECO, WAEC prep models

These are models that make sense here. Models trained with local context, for real problems. Why aren’t we building them? If young Africans contribute to the training and evaluation of models, they gain:

  • Ground truth
  • Embedded context
  • Architectural intuition

In a truly AI-native world, this is power. Policy and ethics should be baked into the base model, not bolted on afterward. If the model learns to show a stop sign under the right conditions, that’s embedded ethics. That’s real design.

