Researchers at IBM Research U.K., the U.S. Military Academy and Cardiff University have recently explored a technique known as Local Interpretable Model-Agnostic Explanations (LIME) for gaining a better understanding of the conclusions reached by machine learning algorithms. Their paper, published in the SPIE Digital Library, could inform the development of artificial intelligence (AI) tools that provide exhaustive explanations of how they reached a particular outcome or conclusion.
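For readers unfamiliar with the method: LIME explains an individual prediction by sampling perturbed copies of the input, querying the black-box model on them, weighting each sample by its proximity to the original input, and fitting a simple surrogate model whose coefficients serve as the local explanation. The sketch below illustrates that recipe for tabular data; the function name `black_box_predict`, the noise scale, and the kernel width are illustrative assumptions, not code from the paper.

```python
# A minimal sketch of the core LIME idea: perturb an instance, weight the
# perturbations by proximity, and fit a simple weighted linear surrogate
# whose coefficients act as the local explanation.
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(instance, black_box_predict, num_samples=1000, kernel_width=0.75):
    """Return per-feature weights approximating the black box near `instance`."""
    rng = np.random.default_rng(0)
    # Perturb the instance with Gaussian noise around its feature values.
    perturbations = instance + rng.normal(scale=0.1, size=(num_samples, instance.shape[0]))
    # Query the black-box model on the perturbed points.
    predictions = black_box_predict(perturbations)
    # Weight each perturbation by its proximity to the original instance.
    distances = np.linalg.norm(perturbations - instance, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    # Fit a weighted linear surrogate; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbations, predictions, sample_weight=weights)
    return surrogate.coef_
```

Here `black_box_predict` stands in for any model's scoring function (for a classifier, the probability of the class being explained); the returned coefficients indicate which features pushed that prediction up or down in the neighbourhood of the instance.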
Tuesday, 28 May 2019
Limiting screen use is not the way to tackle teenage sleep problems
In both Europe and the US, more than 90% of adolescents have their faces buried in screens before bed. Often, this comes at a cost to sleep. Frequent screen users are much more likely to report falling asleep later, sleeping less, and waking during the night. Such difficulties are linked not only to poorer academic performance, but also to an increased risk of health issues such as diabetes and heart disease in later life.
Artificial intelligence detects a new class of mutations behind autism
Many mutations in DNA that contribute to disease are not in actual genes but instead lie in the 99% of the genome once considered "junk." Even though scientists have recently come to understand that these vast stretches of DNA do in fact play critical roles, deciphering these effects on a wide scale has been impossible until now.
Getting to Mars, whatever it takes
Sending manned missions to Mars is essential, according to Pierre Brisson, the president of Mars Society Switzerland, "because we can." We spoke with him about this challenge when he visited EPFL recently to give a talk.
Bringing human-like reasoning to driverless car navigation
With the aim of bringing more human-like reasoning to autonomous vehicles, MIT researchers have created a system that uses only simple maps and visual data to enable driverless cars to navigate routes in new, complex environments.
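The article does not describe the architecture, but the combination it mentions, a coarse map plus camera input producing driving commands, can be illustrated with a toy model. The PyTorch sketch below is a speculative illustration of that general idea only; the network structure, input sizes, and the class name `MapAndVisionPolicy` are assumptions made for the example, not the MIT system.

```python
# Speculative sketch: fuse features from a camera image with a coarse,
# rasterized map patch and regress a steering command.
import torch
import torch.nn as nn

class MapAndVisionPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # Small CNN over the forward camera image (assumed 3 x 96 x 96).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Small CNN over a coarse map patch (assumed 1 x 64 x 64).
        self.map_encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fuse the two feature vectors and predict a steering angle.
        self.head = nn.Sequential(nn.Linear(32 + 16, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, image, map_patch):
        features = torch.cat([self.image_encoder(image), self.map_encoder(map_patch)], dim=1)
        return self.head(features)

# Example forward pass with random tensors standing in for sensor data.
policy = MapAndVisionPolicy()
steering = policy(torch.randn(1, 3, 96, 96), torch.randn(1, 1, 64, 64))
```

A real system of this kind would be trained on recorded human driving and would typically predict a distribution over steering commands rather than a single value.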