
Alzheimer’s screening, drones for forest mapping, machine learning in space, more – TechCrunch

Scientific papers are coming out faster than anyone can read them all, especially in machine learning, which now affects (and produces papers in) virtually every industry and company. This column aims to collect the most important recent discoveries and papers – especially in, but not limited to, artificial intelligence – and explain why they matter.

This week: a startup that uses drones to map forests, a look at how machine learning methods for mapping social media networks can also predict Alzheimer's disease, improved computer vision for space-based sensors and other recent advances.

Predicting Alzheimer's disease from speech patterns

Machine learning tools are being used to aid diagnosis in many ways, because they are sensitive to patterns that humans find difficult to detect. IBM researchers have potentially found such patterns in speech that are predictive of the speaker developing Alzheimer's disease.

The system needs only a few minutes of ordinary speech in a clinical setting. The team used a large dataset (the Framingham Heart Study) going back to 1948, which allowed them to identify speech patterns in people who would later develop Alzheimer's disease. The accuracy is about 71 percent, or 0.74 area under the curve for those of you who are more statistically inclined. That's far from certainty, but current basic tests are barely better than a coin flip at predicting the disease this far ahead of time.
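For readers unfamiliar with the metric, area under the (ROC) curve is easy to compute; the sketch below uses scikit-learn on invented labels and risk scores, not the Framingham data.

```python
# Toy illustration of the area-under-the-curve (AUC) figure the IBM
# study reports. The labels and risk scores below are invented; they
# are not the Framingham data.
from sklearn.metrics import roc_auc_score

# 1 = later developed Alzheimer's, 0 = did not (hypothetical)
y_true = [0, 0, 0, 1, 0, 1, 1, 0, 1, 1]
# the model's predicted risk from speech features (hypothetical)
y_score = [0.1, 0.3, 0.35, 0.4, 0.5, 0.55, 0.7, 0.2, 0.8, 0.9]

# AUC is the probability that a randomly chosen positive case is
# ranked above a randomly chosen negative one; 0.5 is a coin flip.
auc = roc_auc_score(y_true, y_score)
print(round(auc, 2))  # → 0.96
```

An AUC of 0.74, as in the study, means the model ranks a future patient above a healthy control about three times out of four.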

This matters because the earlier Alzheimer's disease can be detected, the better it can be managed. There is no cure, but there are promising treatments and practices that can slow or mitigate the worst symptoms. A non-invasive, quick test like this could be a powerful new screening tool, and is certainly an excellent demonstration of the usefulness of this field of technology.

(Don't read the paper expecting to find an exact list of symptoms or anything like that – the speech features involved aren't really something you can watch for in everyday life.)

So-cell networks

Making sure your deep learning network generalizes to data outside its training environment is a key part of any serious ML research. But few go so far as to test a model against data that is completely foreign to it. Perhaps they should!

Researchers at Uppsala University in Sweden took a model used to identify groups and connections in social media and applied it (suitably modified, of course) to tissue scans. The tissue had been processed so that the resulting images produced thousands of tiny dots representing mRNA.

Normally, the different groups of cells, representing types and areas of tissue, must be identified and labeled by hand. But the graph neural network, designed to identify social groups based on similarities like shared interests in a virtual space, proved it could perform a similar task on cells. (See the image above.)

"We're using the latest artificial intelligence methods – specifically graph neural networks, developed to analyze social networks – and adapting them to understand biological patterns and systematic variation in tissue samples. The cells are comparable to social groupings that can be defined by the activities they share in their social networks," said Uppsala's Carolina Wählby.

It's an interesting illustration not just of the flexibility of neural networks, but of how structures and architectures repeat at all scales and in all contexts. As without, so within, if you like.
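To make the analogy concrete, here is a much-simplified stand-in for the approach: treat each detected dot as a node, connect nearby dots, and run a classic social-network community-detection algorithm over the resulting graph. The coordinates below are synthetic, and greedy modularity maximization replaces the graph neural network of the actual study; this only sketches the idea of finding "social groups" of cells.

```python
# Simplified analogue of the Uppsala approach: treat each detected
# mRNA dot as a node, connect nearby dots, and look for "communities"
# the way social-network analysis does. The real work uses graph
# neural networks; modularity maximization here is just a stand-in,
# and the dot coordinates are synthetic.
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

random.seed(0)

# Two synthetic clusters of dots (x, y), mimicking two tissue regions
dots = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(30)]
dots += [(random.gauss(10, 1), random.gauss(10, 1)) for _ in range(30)]

# Connect every pair of dots closer than a distance threshold
G = nx.Graph()
G.add_nodes_from(range(len(dots)))
for i in range(len(dots)):
    for j in range(i + 1, len(dots)):
        dx, dy = dots[i][0] - dots[j][0], dots[i][1] - dots[j][1]
        if (dx * dx + dy * dy) ** 0.5 < 4.0:
            G.add_edge(i, j)

# The two spatial clusters come out as separate "social groups"
communities = greedy_modularity_communities(G)
print(len(communities))
```

Swapping friendship edges for physical proximity is the whole trick: once the data is a graph, the community-finding machinery doesn't care what the nodes are.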

Drones in nature

The vast forests of our national parks and timber farms hold countless trees, but you can't put "countless" in the paperwork. Someone has to make an actual estimate of how well various regions are growing, the density and types of trees, the extent of disease or wildfire, and so on. This process is only partially automated, since aerial photography and scans reveal only so much, while on-the-ground surveys are detailed but extremely slow and limited.

Treeswift aims to take the middle path by equipping drones with the sensors they need both to navigate and to accurately measure the forest. Flying through much faster than a person can walk, they can count trees, watch for problems and generally collect a ton of useful data. The company is still at a very early stage, having spun out of the University of Pennsylvania and received an SBIR grant from the NSF.

"Companies are increasingly looking to forest resources to combat climate change, but the workforce isn't growing to meet that need," said Stephen Chen, co-founder and CEO of Treeswift and a doctoral student in Computer and Information Science (CIS) at Penn Engineering, in a Penn news story. "I want to help every forester do what they do, but more efficiently. These robots will not replace human jobs. Instead, they're providing new tools to the people who have the insight and passion to manage our forests."

Another area where drones are making very interesting moves is underwater. Autonomous submersibles are helping map the seafloor, track ice shelves and follow whales. But they all share a small Achilles' heel: they must periodically be picked up, charged and relieved of their data.

Purdue engineering professor Nina Mahmoudian has created a docking system that lets submersibles easily and automatically connect for power and data exchange.

A yellow marine robot (left, underwater) finds its way to a mobile docking station to recharge and upload data before continuing its task. (Photo: Purdue University / Jared Pike)

The vehicle needs a special nosecone, which can find and plug into a station that establishes a secure connection. The station could be an autonomous vessel itself, or a permanent fixture somewhere; what matters is that the smaller vehicle can make a pit stop to recharge and debrief before moving on. If it's lost (a real danger at sea), its data won't be lost along with it.

You can see the system in action below:


Sound in theory

Drones may soon be a fact of city life, though we're probably still a ways off from the automated personal helicopters some believe are just around the corner. But living under a drone highway means constant noise, so people are always looking for ways to reduce the turbulence and resulting sound from wings and propellers.

Computer model of an airplane with simulated turbulence around it.

It looks like it's on fire, but that's turbulence.

Researchers at King Abdullah University of Science and Technology have found a new, more efficient way to simulate airflow in these situations; fluid dynamics is essentially as complex as you make it, so the trick is to apply your computing power to the right parts of the problem. They found they could simulate the flow at high resolution only near the surface of the theoretical aircraft, since beyond a certain distance there was little point in knowing exactly what was happening. Improvements to models of reality don't always need to be better in every way – in the end, it's the results that matter.
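The "resolution where it matters" principle can be sketched with a simple stretched grid: fine spacing near the surface, geometric coarsening away from it. The numbers below are arbitrary; this is the general technique, not the KAUST method itself.

```python
# Illustration of the "resolution where it matters" principle: a 1-D
# wall-normal grid that is very fine near the aircraft surface and
# geometrically coarser farther away. The numbers are arbitrary; this
# shows the general technique, not the KAUST method itself.
import numpy as np

first_cell = 1e-4  # finest spacing, right at the surface
growth = 1.2       # each cell 20% larger than the previous one
n_cells = 40

spacings = first_cell * growth ** np.arange(n_cells)
y = np.concatenate([[0.0], np.cumsum(spacings)])  # node positions

# Most grid points cluster near the surface; the outermost cell is
# roughly a thousand times larger than the first.
print(f"{spacings[0]:.1e} near wall, {spacings[-1]:.1e} far away")
```

The payoff is that the expensive, fine cells are spent only in the thin region where the flow detail actually matters.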

Machine learning in space

Computer vision algorithms have come a long way, and as their efficiency improves they are increasingly being deployed at the edge rather than in data centers. In fact, it has become fairly common for camera-bearing devices like phones and IoT gadgets to do some local ML work on images. But in space, it's a different story.

Image credits: Cosine

Until recently, doing ML work in space was simply too expensive, power-wise, to even consider. That's power that could be used to capture another image, transmit data to the ground, and so on. HyperScout 2 is exploring the possibility of ML work in space, and its satellite has begun applying computer vision techniques to the imagery it collects immediately, before sending it down. ("Here's a cloud – there's Portugal – there's a volcano...")

For now there's little practical use, but object detection can easily be combined with other functions to create new use cases, from saving power when there are no objects of interest in view, to passing metadata to other tools that might work better if they're tipped off.
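One rough sketch of why on-board inference matters: if the satellite classifies each frame itself, it only needs to downlink the frames that contain something of interest. The classify_frame function below is a hypothetical stand-in for a real on-board model, and the frames are random arrays.

```python
# Sketch of why on-orbit inference saves downlink bandwidth: classify
# each captured frame on the satellite and transmit only the frames
# with something of interest. classify_frame is a hypothetical
# stand-in for a real on-board computer vision model.
import numpy as np

rng = np.random.default_rng(0)

def classify_frame(frame):
    """Hypothetical on-board classifier: flag a frame as interesting
    if enough of its pixels are bright (say, a cloud or an eruption)."""
    return (frame > 0.8).mean() > 0.05

frames = [rng.random((64, 64)) for _ in range(5)]         # bright scenes
frames += [0.5 * rng.random((64, 64)) for _ in range(5)]  # empty, dark scenes

# Only frames the classifier flags are queued for transmission
downlink = [f for f in frames if classify_frame(f)]
print(len(downlink), "of", len(frames), "frames worth transmitting")
```

Here half the frames never leave the spacecraft, which is exactly the kind of power and bandwidth saving the article describes.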

In with the old, out with the new

Machine learning models are great at making educated guesses, and in disciplines with large backlogs of unsorted or poorly documented data, it can be very helpful to let an AI make a first pass so that grad students and archivists can spend their time more productively. The Library of Congress is doing it with old newspapers, and now Carnegie Mellon University's libraries are getting in on the act.

CMU's photo archive of millions of items is in the process of being digitized, but to be useful to historians and curious browsers, it needs to be organized and tagged – so computer vision algorithms are being put to work grouping similar images, identifying objects and locations, and performing other valuable basic cataloging tasks.

“Even a partially successful project would significantly improve the collection’s metadata and could provide a possible solution for generating metadata if the archives were ever funded to digitize the entire collection,” said Matt Lincoln of CMU.
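The "group similar images" step can be sketched in a few lines: turn each photo into a coarse feature vector and cluster the vectors. The images below are synthetic, and the brightness histogram stands in for the learned embeddings a real cataloging pipeline would use.

```python
# Minimal sketch of the "group similar images" step in automated
# cataloging: turn each photo into a coarse feature vector and
# cluster the vectors. The images are synthetic, and the brightness
# histogram stands in for a learned embedding.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Two synthetic "photo" populations: dark scans and bright scans
images = [rng.uniform(0.0, 0.4, (32, 32)) for _ in range(10)]
images += [rng.uniform(0.6, 1.0, (32, 32)) for _ in range(10)]

def features(img, bins=8):
    # Coarse brightness histogram as a stand-in for a real embedding
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

X = np.array([features(img) for img in images])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # dark and bright images land in different clusters
```

A human then only has to name each cluster ("glass plate negatives," "campus aerials") instead of tagging every photo individually.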

A completely different project, yet one that seems somehow related, is this work by a student at the Escola Politécnica da Universidade de Pernambuco in Brazil, who had the idea of applying machine learning to some old maps.

The tool they used takes old line-drawing maps and attempts to generate a sort of satellite image from them using a generative adversarial network; GANs essentially pit two models against each other, one generating content and the other trying to tell it apart from the real thing.

Image credits: Escola Politécnica da Universidade de Pernambuco

Well, the results aren't what you'd call completely convincing, but they're still promising. Such maps are rarely accurate, but that doesn't mean they're completely abstract – recreating them in the context of modern mapping techniques is a fun idea that could help make these places seem a little less remote.
