Campbell Brown has spent her career pursuing accurate information, first as a renowned TV journalist, then as Facebook’s first, and only, dedicated head of news. Now, watching AI reshape how people consume information, she sees history threatening to repeat itself. This time, she’s not waiting around for someone else to fix it.
Her company, Forum AI — which she discussed recently with TechCrunch’s Tim Fernholz at a StrictlyVC evening in San Francisco — evaluates how foundation models perform on what she calls “high-stakes topics” — geopolitics, mental health, finance, hiring — subjects where “there are no clear yes-or-no answers, where it’s murky and nuanced and complicated.”
The idea is to find the world’s leading experts, have them design benchmarks, then train AI judges to evaluate models at scale. For Forum AI’s geopolitics work, Brown has recruited Niall Ferguson, Fareed Zakaria, former Secretary of State Tony Blinken, former House Speaker Kevin McCarthy, and Anne Neuberger, who led cybersecurity in the Obama administration. The goal is to get AI judges to around 90% agreement with those human experts, a threshold she said Forum AI has been able to reach.
Brown traces the origin of Forum AI, founded 17 months ago in New York, to a precise moment. “I was at Meta when ChatGPT first launched publicly,” she recalled, “and I remember really shortly after realizing this is going to be the funnel through which all information flows. And it’s not very good.” The implications for her own children made the moment feel almost existential. “My kids are going to be literally dumb if we don’t figure out how to fix this,” she recalled thinking.
What frustrated her most was that getting it right didn’t appear to be anyone’s priority. Foundation model companies, she said, are “extremely focused on coding and math,” while news and information are harder. But harder, she argued, doesn’t mean optional.
Indeed, when Forum AI began evaluating the leading models, the findings weren’t exactly inspiring. She described Gemini pulling from Chinese Communist Party websites “for stories that had nothing to do with China,” and noted a left-leaning political bias across nearly all models. Subtler failures abound too, she said, including missing context, missing perspectives, and straw-manning arguments without acknowledgment. “There’s a long way to go,” she said. “But I also think that there are some very easy fixes that could vastly improve the results.”
Brown spent years at Facebook watching what happens when a platform optimizes for the wrong things. “We failed at a lot of the things we tried,” she told Fernholz. The fact-checking program she built no longer exists. The lesson, even if social media has turned a blind eye to it, is that optimizing for engagement has been lousy for society and left many people less informed.
Her hope is that AI can break that cycle. “Right now it can go either way,” she said; companies could give users what they want, or they could “give people what is real and what is honest and what is fair.” She said the idealistic version of that — AI optimizing for truth — might sound naive. But she thinks enterprise could be the unlikely ally here. Businesses using AI for credit decisions, lending, insurance, and hiring care about liability, and “they are going to want you to optimize for getting it right.”
That enterprise requirement is also what Forum AI is betting its business on, though turning compliance interest into consistent revenue remains a challenge, especially since much of the current market remains satisfied with checkbox audits and standardized benchmarks that Brown considers inadequate.
The compliance landscape, she said, is “a joke.” When New York City passed the first hiring bias law requiring AI audits, the state comptroller found that more than half had violations that went undetected. Real assessment, she said, requires domain expertise to work through not just known scenarios but edge cases that “can get you into trouble that people do not think about.” And that work takes time. “Smart generalists are not going to cut it.”
Brown — whose company raised $3 million last fall in a round led by Lerer Hippeau — is uniquely positioned to explain the disconnect between the AI industry’s self-image and the reality for most users. “You hear from the leaders of the big tech companies, ‘This technology is going to change the world,’ ‘it’s going to put you out of work,’ ‘it’s going to cure cancer,'” she said. “But then to a normal person who’s just using a chatbot to ask basic questions, they’re still getting lots of slop and incorrect answers.”
Trust in AI sits at extraordinarily low levels, and she thinks that skepticism is, in many cases, justified. “The conversation is sort of happening in Silicon Valley around one thing, and a totally different conversation is happening among consumers.”