
Howard Students Prove They're ‘One of One’ With Win at Inaugural Microsoft AI Policython


During the 2025 Congressional Black Caucus Week on Capitol Hill, the Truth and Service Solutions Inc. team of Howard students demonstrated what makes the university “one of one” as they took first place at the inaugural Microsoft AI Policython. 

Supported by Dr. Talitha Washington, executive director of Howard’s Center for Applied Data Science and Analytics, the team was made up of junior psychology major Janeen Louis, junior political science and economics major Fatumata Dia, senior computer science major Kyla Hockett, junior computer science major Soluchi Fidel-Ideabuchi, and senior mathematics major Sydney Helstone. Organized by the Black at Microsoft – DMV Chapter, the contest challenged students from Howard, the University of the District of Columbia, and Coppin State University to tackle real-world issues revolving around AI as they prepared and pitched policies based on hypothetical scenarios. Working directly with mentors from Microsoft, the teams gained invaluable experience in all stages of policymaking, from identifying an issue to researching, drafting, and defending a solution in front of real-world experts. 

The Truth and Service Solutions Inc. team presenting during the competition

“I hope students leave the experience with a deeper appreciation for how policy and innovation intersect in shaping the future of artificial intelligence,” said Washington. “Beyond the competition, I want them to have agency and see themselves as leaders in responsible AI technology and policy innovation. My goal is for them to build both confidence and capacity to create AI technology that makes our world a better place.”

A Simple Question with a Complex Answer

The scenario presented to the Howard team — concerning a bank’s AI budgeting app that caused students to overdraw their accounts — particularly resonated with them as students in an increasingly AI-dependent world. 

“The breakdown of the problem was there was a bank and a university that developed an AI budgeting tool that students could download, but the app was giving bad advice and causing students to overdraft,” explained Louis. “So, the question was should the tool be paused, revised, or replaced, and who pays for the errors?” 

First, the team noted that the question of who pays depended on whether the tool carried an appropriate disclaimer about the risks of using it. If it didn’t, the bank may have been at fault for misleading students. The team’s response delved much deeper, though, showing a level of nuance that drew on their combined expertise in computer science, economics, and psychology, as well as their own real-world experiences.

Dr. Washington takes a selfie with the team.

“I definitely saw myself reflected in the problem statement, especially as a student who often uses AI tools alongside traditional learning resources to better understand complex concepts in my major,” said Fidel-Ideabuchi. 

The team came up with a multi-pronged proposal that addressed not just the question of fault but also the ethical, financial, and safety issues underlying it. Their solutions ranged from making clear that the AI could serve only in an advisory role to establishing overdraft protections for all student accounts. The proposal also included bringing in a neutral auditor to oversee the app, establishing “mandatory fun” trainings, and ensuring students are well informed about the risks of using the app. 

While developing their proposals, the team made sure to account for how people actually behave, citing how often we all skip long, dull trainings or scroll straight past the terms of service. 

“Say you get an iPhone, there’s this long group of texts and then you scroll to the bottom. Most people — all people I think — don't read it; you just press agree and you don't know what the phone is going to do with your information or your data,” said Dia. “We incorporated little fun trainings that included tool tips, small videos to ensure that the user or the students know exactly what the app is doing throughout the entirety of its use.”

All members of the team contributed ideas and occasionally had to be reminded by their Microsoft mentors of a very real aspect of policymaking: budget. 

“We kind of went berserk when coming up with our solutions,” said Louis. “One of the comments that we received from the judges was these are great solutions, but it would cost a lot of money to incorporate. But at that point we thought we were billionaires.”

AI Ethics — Not Just a Hypothetical

Beyond the opportunity to learn from industry leaders how policy is created, each member of the team was drawn to the contest by their own firsthand experiences with AI in their fields. They all agreed that there is an immediate need for better regulation. 

Just in the past two years, Hockett has seen AI transform her work as an intern at Deloitte.  

“Last year I didn’t really use AI that much, and this year it was heavily, heavily pushed that I do,” she explained. “All my coworkers and some interns and some people above me, they were saying, ‘oh, let’s use AI to make the PowerPoints. Let’s use AI to get this document or write this document.’ I think my entire project plan for the summer was written up via AI. I had mixed feelings about that. I think we can do a lot of this just by ourselves.”

For Louis, it was an AI, ethics, and bias class she took during her freshman year that first led her to explore AI policy. 


“I realized it was already impacting my future field,” she said. “For example, people will use ChatGPT for therapy, which is crazy. There’s also actual AI therapy applications being developed. There’s AI therapists being developed as we speak who are supposed to be providing, I think, care that should only be human-to-human based. Therapy should stay with people.”

Even on Capitol Hill, AI is becoming an ever-present feature. To Dia, using the often-unreliable technology in such powerful positions is troubling. Still, she thinks it is too late to go back to a world without AI, which is exactly why clear ethical guidelines and strong policies are necessary. 

“I talked to my professor about this in our lesson, and he said that half of the things that we have as humans, including AI, we have because we believe that that is what's going to make us live a good life and live an easier life,” she said. “It’s here to stay because it’s been helping people do that. It’s not going away, whether in policy, in computer science, in psychology, in the environmental scene. I know this is something that the founding father of AI [Geoffrey Hinton] said himself: ‘this is going to be a big thing, and it’s dangerous. So be careful.’”

For Louis, the answers to tackling AI should come from the people most affected, and events like the Policython are the first step. 

“I hope that people can listen to our ideas, listen to what we have to say,” she said. “Because we, the young population, understand technology a lot, we understand how it impacts us, and we’re the ones who are going to have to live with this forever.”