Understanding the Limits of AI: When Algorithms Fail

By: Chris Dunn

I am lucky to live close to a wonderful institution of learning, Indiana University. I decided it was time to take advantage of some of the opportunities available to me to grow as a developer, so this year I have been making an effort to attend seminars and colloquia sponsored by the Computer Science department. I recommend everyone take the time to see what's available locally.

At the end of October I had the great pleasure of attending a lecture by Timnit Gebru, a postdoctoral researcher at Microsoft Research. Her research focuses on the social implications of AI and on the algorithms and data sets we use in machine learning. The embedded video is similar to the talk I attended.

One big takeaway from the lecture was the concept of automation bias: our tendency to assume that automated processes produce accurate results. When we build systems that "think" and predict, the conclusions they draw are heavily influenced by the data sets we use to teach them (not unlike human learning). If those data sets are biased, or do not fully represent the group you are predicting for, the results cannot be accurate.
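To make that concrete, here is a toy sketch of my own (not an example from the talk): a trivial "model" that simply predicts the majority outcome seen in training. The groups and outcomes are hypothetical. When one group is under-represented in the training data, the model can look accurate overall while being wrong for that group almost every time.

```csharp
// Toy illustration of data set bias: a "model" that predicts the single
// majority outcome from its training data. Groups and outcomes are made up.
using System;
using System.Linq;

class BiasDemo
{
    static void Main()
    {
        // Hypothetical training data: (group, outcome) pairs.
        // Group A dominates the sample; group B's typical outcome differs.
        var training = Enumerable.Repeat(("A", true), 90)
            .Concat(Enumerable.Repeat(("B", false), 10))
            .ToArray();

        // "Train": predict the majority outcome for everyone.
        bool prediction = training.Count(x => x.Item2) > training.Length / 2;

        // Evaluate per group: the model is 90% accurate overall,
        // yet wrong for every member of the under-represented group.
        foreach (var group in new[] { "A", "B" })
        {
            var members = training.Where(x => x.Item1 == group).ToArray();
            double accuracy = members.Count(x => x.Item2 == prediction)
                              / (double)members.Length;
            Console.WriteLine($"Group {group}: accuracy {accuracy:P0}");
        }
    }
}
```

Run it and group A scores 100% while group B scores 0%, even though the headline accuracy is 90%. That is the trap of automation bias in miniature: an aggregate number that looks trustworthy can hide a complete failure for the people the data under-represents.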

Timnit proposes "Datasheets for Datasets" (standardized documentation describing how a data set was collected and what it does and does not cover), transparency in algorithms, and oversight to ensure that decisions made based on AI are not only founded on accurate information, but are also socially responsible.

During the lecture I attended, she also rated some of the machine learning platforms out there on effectiveness and results. She gave pretty high marks to ML.NET, Microsoft's machine learning framework. A few months ago I tinkered with a simple machine learning and prediction project using ML.NET; a minimal sketch of that kind of pipeline follows. You can find the full code on my GitHub.
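This sketch assumes the current ML.NET API (an MLContext with a regression trainer); the HouseData columns, the houses.csv file name, and the example prediction are hypothetical placeholders, not the actual project from my repository.

```csharp
// Minimal ML.NET regression sketch: load a CSV, train a linear model,
// and predict a value for one new input. Data and columns are placeholders.
using Microsoft.ML;
using Microsoft.ML.Data;

public class HouseData
{
    [LoadColumn(0)] public float Size { get; set; }
    [LoadColumn(1)] public float Price { get; set; }
}

public class PricePrediction
{
    // ML.NET regression trainers write their output to the "Score" column.
    [ColumnName("Score")] public float Price { get; set; }
}

class Program
{
    static void Main()
    {
        var mlContext = new MLContext(seed: 0);

        // Load training data from a CSV file (path is a placeholder).
        IDataView data = mlContext.Data.LoadFromTextFile<HouseData>(
            "houses.csv", hasHeader: true, separatorChar: ',');

        // Pipeline: gather the input columns into a "Features" vector,
        // then train an SDCA linear regression against the Price label.
        var pipeline = mlContext.Transforms
            .Concatenate("Features", nameof(HouseData.Size))
            .Append(mlContext.Regression.Trainers.Sdca(
                labelColumnName: nameof(HouseData.Price),
                featureColumnName: "Features"));

        ITransformer model = pipeline.Fit(data);

        // Predict the price of a single new house.
        var engine = mlContext.Model
            .CreatePredictionEngine<HouseData, PricePrediction>(model);
        var prediction = engine.Predict(new HouseData { Size = 2500f });

        System.Console.WriteLine($"Predicted price: {prediction.Price:F0}");
    }
}
```

What struck me while tinkering was how little ceremony this takes: the data classes, the pipeline, and the prediction engine are the whole story. It is also a good reminder of the lecture's point, since the model above will only ever be as good as whatever ends up in that CSV.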

Tags: ai, machine learning
