Was New York’s Mass-Text Manhunt Really Unprecedented?

The Verge, 9/20/2016

New York City police recently used the Wireless Emergency Alert system to send a “Wanted” text message about a bombing suspect, and they plan to use the system for similar purposes in the future. The move drew heavy criticism: the short, pictureless message may have encouraged mass racial profiling, and overuse of the system could lead people to ignore it.

A Beauty Contest Was Judged by AI and the Robots Didn’t Like Dark Skin

The Guardian, 9/8/2016

Beauty.AI developed a set of algorithms to judge photos according to five factors in human standards of beauty; the algorithms disproportionately chose photos of white people. The article discusses the broader consequences of emergent bias in algorithms and their training datasets, including more consequential examples like predictive policing.

How an Algorithm Learned to Identify Depressed Individuals by Studying Their Instagram Photos

MIT Technology Review, 8/19/2016

Researchers have developed a machine-learning algorithm that achieves 70% recall in identifying depressed individuals from characteristics of their (pre-diagnosis) Instagram photo posts. This is a striking example of a medical development with real potential for benefit (early diagnosis and treatment) that also raises serious concerns (privacy, misuse of the information, misprediction). It’s also an example of Mechanical Turk being used as a research platform.
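For readers unfamiliar with the metric: “70% recall” means the algorithm correctly flagged about 7 of every 10 people who were in fact depressed (it says nothing by itself about false alarms, which precision measures). A minimal sketch, with toy labels that are purely hypothetical and not data from the study:

```python
# Illustrative only: how recall is computed for a binary classifier.
# The labels below are made up; they are not from the Instagram study.

def recall(y_true, y_pred):
    # Recall = true positives / all actual positives.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    actual_pos = sum(y_true)
    return tp / actual_pos if actual_pos else 0.0

y_true = [1] * 10 + [0] * 10   # 1 = actually depressed
y_pred = [1] * 7 + [0] * 3 + [1] * 2 + [0] * 8  # model catches 7 of 10 cases

print(recall(y_true, y_pred))  # 0.7
```

A model could reach high recall while still mislabeling many healthy people, which is part of why the misprediction concern above matters.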