Lucie Dewaleyne · 3 min read

Fei-Fei Li: Ethical AI isn’t about morality, it’s about design


In 2009, while the tech world swore by computational power, a Princeton professor who had just moved to Stanford made an insane bet, and everyone told her she was wasting her time.

Her name is Fei-Fei Li. Her project: ImageNet.

At the time, computers couldn't reliably tell a cat from a dog. The dominant approach among engineers was to write better algorithms. But Fei-Fei Li had the opposite intuition: the problem wasn't the code, it was the data.

She then spent three years compiling a colossal database of 14 million manually annotated images. Thanks to this titanic effort, machines learned to "see," triggering the Deep Learning revolution in 2012.

The Flip Side: When the Machine Learns Our Flaws

In her book The Worlds I See (2023), Fei-Fei Li recounts how the technological dream collided with a brutal sociological reality.

By feeding machines real-world images, we also imparted our biases to them. Algorithms are not neutral; they are mirrors.

The most striking example: search engines and databases trained on these massive volumes began making troubling associations. A famous study showed that searching for the term "CEO" returned over 90% images of white men, rendering women leaders and people of color all but invisible.

The machine wasn't "sexist" by intent; it was by statistical learning. It had learned that "authority = male."
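To make that mechanism concrete, here is a minimal, hypothetical sketch (not taken from ImageNet or Fei-Fei Li's work): a naive model trained on skewed data simply reproduces the skew it has seen.

```python
# Hypothetical illustration: a trivial "classifier" trained on imbalanced labels.
from collections import Counter

# Toy training set: labels attached to "CEO" images, skewed like the study described above.
training_labels = ["man"] * 92 + ["woman"] * 8

counts = Counter(training_labels)
majority_label = counts.most_common(1)[0][0]

def predict_ceo_gender() -> str:
    # With no other context, the statistically "optimal" guess is always the majority class.
    return majority_label

print(counts)                # Counter({'man': 92, 'woman': 8})
print(predict_ceo_gender())  # 'man' -- the model mirrors its data, not reality.
```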

Human-Centered AI

Faced with this observation, Fei-Fei Li did not reject technology. She founded the Stanford Institute for Human-Centered AI (HAI) with a radical conviction:

"There is nothing artificial about Artificial Intelligence. It is inspired by people, created by people, and, above all, it impacts people."

For her, ethics isn't about reining in AI, but about giving it context. An "ethical AI" is a machine capable of understanding the human intent behind raw data. If we don't code for this nuance, "AI will decide for us," blindly reproducing past biases instead of helping to build the future.

Ethics is a "Design Challenge"

Why is this crucial for a company today?

Because ethics in tech (AI Ethics) isn't just a moral "sticker" or a legal constraint. It's a design challenge.

Building a high-performing solution – whether it's a recommendation algorithm for e-commerce or a CV screening tool – requires vigilance about data quality ("Data Engineering"). A biased AI is an AI that performs poorly, excludes potential customers, and makes bad decisions.
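As a rough illustration of what that vigilance can look like in practice, here is a small Python sketch of a representation check one might run before training; the dataset, column name, and threshold are assumptions made for the example, not a prescribed method.

```python
# A minimal data-quality check: flag groups that are underrepresented in a training set.
import pandas as pd

def check_representation(df: pd.DataFrame, column: str, min_share: float = 0.10) -> list[str]:
    """Return the groups in `column` whose share of the dataset falls below `min_share`."""
    shares = df[column].value_counts(normalize=True)
    return shares[shares < min_share].index.tolist()

# Hypothetical CV-screening dataset with a 'gender' attribute.
cv_data = pd.DataFrame({"gender": ["M"] * 90 + ["F"] * 10})

underrepresented = check_representation(cv_data, "gender")
if underrepresented:
    print(f"Warning: underrepresented groups before training: {underrepresented}")
```

A check like this doesn't make a dataset fair by itself, but it surfaces the kind of imbalance described above before it gets baked into a model.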
