Faster Programs, Easier Programming

MIT News, 11/7/2016

Researchers from MIT’s CSAIL and Stony Brook University are exploring ways to make programming multi-core computers easier. They have created a method for describing a desired computation in general terms and then automatically converting that description into a parallelized program. This makes it easier for domain experts (such as computational biologists or cybersecurity specialists) to quickly write programs to support their research or tasks, without having to be parallel-programming experts as well.

AI Predicts Outcomes of Human Rights Trials

University College London/UCL News, 10/24/2016

Artificial intelligence has recently been used to predict (past) judicial decisions in the European Court of Human Rights with surprising accuracy. This method could potentially be used to automatically identify cases likely to involve human rights violations, and it is also an interesting example of how artificial intelligence can quantify and even predict human behavior through pattern recognition.
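The article describes prediction from the text of past judgments. As a rough, hedged illustration of text-based outcome prediction (toy data and a toy scorer, not the study's actual model or corpus), one can score a case summary by word overlap with each outcome class:

```python
from collections import Counter

# Toy training "judgments" labeled by outcome (invented examples).
train = [
    ("detention without trial prolonged ill-treatment", "violation"),
    ("degrading conditions of detention ill-treatment", "violation"),
    ("complaint manifestly ill-founded procedure followed", "no_violation"),
    ("fair hearing held procedure followed promptly", "no_violation"),
]

# Word frequencies per outcome class.
vocab = {}
for text, label in train:
    vocab.setdefault(label, Counter()).update(text.split())

def predict(text):
    """Pick the class whose training vocabulary best matches the text."""
    scores = {label: sum(counts[w] for w in text.split())
              for label, counts in vocab.items()}
    return max(scores, key=scores.get)

print(predict("applicant held in degrading detention conditions"))  # violation
```

The real study used a far richer representation of case text, but the principle is the same: recurring language patterns in past cases carry a measurable signal about outcomes.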

Can We Open the Black Box of AI?

Nature International Weekly Journal of Science, 10/5/2016

Scientists are attempting to understand how computers think and learn in order to verify the reliability of large-scale data analysis. This article covers several efforts in the last few years to understand how deep neural nets work. If scientists can understand how computers gather and interpret data in deep learning, these techniques can be used with more confidence, in day-to-day applications as well as in cutting-edge scientific research.

Google Translate Gets a Deep-Learning Upgrade

IEEE Spectrum, 10/3/2016

Engineers at Google are upgrading the Google Translate service to use deep learning, an artificial-intelligence technique. This is the first time this translation method has been deployed in a large production environment. The update greatly improves the accuracy of translations, increasing Google Translate’s ability to facilitate communication between speakers of different languages.

Google Open-Sources Show and Tell, a Model for Producing Image Captions

Venture Beat, 9/22/2016

With the help of crowdsourced data, Google AI’s image-recognition algorithms are achieving greater accuracy. Objects in photographs are now described more accurately and related to one another in auto-generated captions. With this increase in photo-captioning accuracy come new questions about privacy online and on social media.

A Beauty Contest Was Judged by AI and the Robots Didn’t Like Dark Skin

The Guardian, 9/8/2016

Beauty.AI developed a set of algorithms to judge photos according to five factors in human standards of beauty; it disproportionately chose photos of white people. The article discusses the potential consequences of emergent bias in algorithms and/or datasets in general, including more consequential examples like predictive policing.

Inferring Urban Travel Patterns From Cellphone Data

MIT News, 8/29/2016

Researchers are using data on where people make cellphone calls to model the movement patterns of Boston commuters; the system may replace or supplement surveys of residents. The article discusses the benefits of gathering and processing more data more quickly and cheaply, though students may be able to identify some disadvantages of using call data.
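The article does not detail the researchers' method, but a common first step in this line of work (shown here as a hedged sketch with invented tower IDs, not the study's actual pipeline) is to label each caller's most frequent nighttime cell tower as "home" and their most frequent business-hours tower as "work":

```python
from collections import Counter

def infer_anchor(calls, hours):
    """Most frequent tower among calls made during the given hours."""
    towers = [tower for hour, tower in calls if hour in hours]
    return Counter(towers).most_common(1)[0][0] if towers else None

# Toy call records for one phone: (hour_of_day, tower_id).
calls = [(8, "T_home"), (9, "T_work"), (13, "T_work"),
         (18, "T_work"), (22, "T_home"), (23, "T_home")]

home = infer_anchor(calls, hours=range(20, 24))  # evening calls
work = infer_anchor(calls, hours=range(9, 18))   # business hours
print(home, work)  # T_home T_work
```

Aggregating such inferred home-to-work trips across many phones yields an origin-destination picture of commuting, which is the kind of information traditionally collected by resident surveys.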

How an Algorithm Learned to Identify Depressed Individuals by Studying Their Instagram Photos

MIT Technology Review, 8/19/2016

Researchers have developed a machine-learning algorithm that achieves 70% recall in identifying depressed individuals from characteristics of their (pre-diagnosis) Instagram photo posts. This is a striking example of a medical development with great potential benefit (early diagnosis and treatment) that also raises serious concerns (privacy, misuse of the information, misprediction). It’s also an example of Mechanical Turk being used as a research platform.
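Recall here means the fraction of truly depressed individuals that the algorithm actually flags. A minimal sketch of the metric (with made-up labels, not the study's data):

```python
def recall(actual, predicted):
    """Fraction of actual positives that were correctly predicted."""
    true_pos = sum(1 for a, p in zip(actual, predicted) if a and p)
    actual_pos = sum(actual)
    return true_pos / actual_pos

# Toy example: 10 individuals, 1 = depressed, 0 = not.
actual    = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
predicted = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
print(recall(actual, predicted))  # 0.7
```

Note that recall alone says nothing about false positives; a model could reach 100% recall by flagging everyone, which is one reason misprediction is among the concerns the article raises.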