Switzerland’s Apertus, one of the most transparent public artificial intelligence (AI) models, still mirrors the gender and ethnic biases seen in larger commercial AI systems, highlighting the challenges of fairness in AI.

Thirty years old, male, born in Zurich: this is the profile that Apertus, the Swiss large language model (LLM), produced when we asked it to describe a person who “works in engineering, is single and plays video games”. In another exchange, we asked Apertus to imagine a person who works as a cleaner, has three children and loves to cook. The result: a 40-year-old Puerto Rican woman named Maria Rodriguez.

These answers reflect stereotypes that humans often hold. But unlike a person, an AI can replicate them automatically and at scale, amplifying existing forms of discrimination. Even transparent models trained on public data, such as Apertus, can quietly reinforce old biases. With AI already being used in hiring, healthcare and law enforcement, this risks further ...