Research on machine learning and artificial intelligence, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. The purpose of this column, Perceptron, is to collect some of the most relevant recent discoveries and papers, particularly in but not limited to artificial intelligence, and explain why they matter.
This month, Meta engineers detailed two recent innovations from the company's research labs: an AI system that compresses audio files, and a model that predicts protein structures as much as 60 times faster than previous systems. Elsewhere, MIT scientists demonstrated using spatial audio information to help machines better understand their surroundings, simulating how a listener would hear sound from any point in a room.
Meta's work on compression isn't exactly uncharted territory. Last year, Google introduced Lyra, a neural audio codec trained to compress speech at very low bitrates. But Meta claims its system is the first to work with CD-quality stereo sound, making it useful for commercial applications like voice calls.
Using artificial intelligence, Meta's compression system, a codec called Encodec, can compress and decompress audio in real time on a single CPU core at bitrates of 1.5 to 12 kbps. Compared to MP3 at 64 kbps, Meta claims Encodec can achieve roughly ten times the compression without significant quality loss.
The researchers behind Encodec say that human evaluators preferred the quality of audio processed by Encodec over audio processed by Lyra, suggesting that Encodec could ultimately be used to deliver better sound in situations where bandwidth is limited or expensive.
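The compression ratios claimed above are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below, in plain Python with assumed round-number bitrates (it is an illustration of the claims in the text, not a measurement of the actual codec), compares uncompressed CD-quality stereo against Encodec's reported 1.5–12 kbps operating range and against a 64 kbps MP3:

```python
# Back-of-the-envelope bitrate arithmetic for the compression claims.
# Assumed figures: CD audio is 44.1 kHz, 16-bit, 2 channels; Encodec's
# reported operating points are 1.5-12 kbps.

CD_QUALITY_KBPS = 44_100 * 16 * 2 / 1000  # = 1411.2 kbps uncompressed

def compression_ratio(source_kbps: float, target_kbps: float) -> float:
    """How many times smaller the compressed stream is."""
    return source_kbps / target_kbps

# Versus raw CD audio at Encodec's highest reported bitrate:
print(round(compression_ratio(CD_QUALITY_KBPS, 12), 1))  # -> 117.6

# Versus a 64 kbps MP3, at a hypothetical 6 kbps Encodec setting,
# consistent with the "roughly ten times" claim:
print(round(compression_ratio(64, 6), 1))  # -> 10.7
```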
Meta's work on protein folding, meanwhile, has less immediate commercial potential. But it could lay the groundwork for important scientific research in biology.
Protein structures predicted by the Meta system.
Meta says its AI system, ESMFold, has predicted the structures of around 600 million proteins from bacteria, viruses and other microbes that have yet to be characterized. That's more than three times the 220 million structures DeepMind predicted earlier this year, which covered nearly every protein from known organisms in DNA databases.
Meta's system is not as accurate as DeepMind's. Of the roughly 600 million proteins it produced, only about a third were of "high quality." But it predicts structures 60 times faster, which allows structure prediction to scale to much larger databases of proteins.
Not to be outdone, Meta's AI division also this month detailed a system designed for mathematical reasoning. The company's researchers say their "neural problem solver" learned from a dataset of successful mathematical proofs to generalize to new and different kinds of problems.
Meta is not the first to build such a system. OpenAI developed its own approach, called Lean, which it announced in February. Separately, DeepMind has experimented with systems that can solve challenging mathematical problems in the study of symmetries and knots. But Meta claims its neural problem solver solved five times as many International Math Olympiad problems as any previous AI system, and outperformed other systems on widely used math benchmarks.
Meta notes that AI for solving mathematical problems could bring benefits in the fields of software verification, cryptography and even aerospace.
We turn our attention to MIT, where research scientists have developed a machine learning model that captures how sounds in a room propagate through space. By modelling the acoustics, the system can learn a room's geometry from sound recordings, which can then be used to build visual renderings of the room.
Researchers say this technology could be applied to virtual and augmented reality software, or to robots that have to navigate complex environments. In the future, they plan to extend the system to new and larger scenes, such as entire buildings or even whole towns and cities.
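The core physical idea, that arrival delays and loudness encode geometry, can be sketched with the classic image-source method from room acoustics. The snippet below is a minimal, hedged illustration of that principle (2-D positions, hypothetical coordinates, a single wall), not the MIT team's learned model, which works in the opposite direction, inferring geometry from recordings:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def direct_arrival(src, lis):
    """Delay (seconds) and 1/r amplitude of the direct sound path."""
    r = math.dist(src, lis)
    return r / SPEED_OF_SOUND, 1.0 / r

def first_reflection(src, lis, wall_x=0.0):
    """Image-source method: mirror the source across the wall x = wall_x.
    The reflected path behaves like a direct path from the mirror image."""
    image = (2 * wall_x - src[0], src[1])
    r = math.dist(image, lis)
    return r / SPEED_OF_SOUND, 1.0 / r

# Hypothetical speaker and listener positions, in metres.
delay_d, amp_d = direct_arrival((1.0, 2.0), (4.0, 2.0))
delay_r, amp_r = first_reflection((1.0, 2.0), (4.0, 2.0), wall_x=0.0)
print(round(delay_d * 1000, 2), "ms, direct (3 m path)")       # -> 8.75 ms
print(round(delay_r * 1000, 2), "ms, off the wall (5 m path)") # -> 14.58 ms
```

Because the echo's extra delay depends on where the wall is, a listener (or a model) that can measure these delays can, in principle, recover the wall's position, which is the inverse problem the MIT system learns.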
At UC Berkeley's robotics department, two separate teams are speeding up the rate at which a four-legged robot can learn to walk and perform other tricks. One team combined best-in-class work from numerous other advances in reinforcement learning to let a robot go from a blank slate to confidently walking on rough terrain in under 20 minutes, in real time.
"Perhaps surprisingly, we find that with a few careful design decisions in terms of the problem statement and algorithm implementation, a four-legged robot can learn to walk from scratch with deep RL in less than 20 minutes, across a variety of environments and surface types. Most importantly, this does not require new algorithmic components or any other unexpected innovations," the researchers write.
Instead, they selected and combined state-of-the-art approaches and achieved remarkable results. You can read the paper here.
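The trial-and-error loop at the heart of reinforcement learning can be sketched in a few lines. The toy below is emphatically not the Berkeley team's system, which trains a deep neural policy on a physical quadruped; it is tabular Q-learning on a hypothetical five-state corridor, where an agent must learn from scratch that stepping right reaches the goal:

```python
import random

# Tabular Q-learning on a toy five-state corridor. The agent starts at a
# random non-goal state and learns, purely by trial and error, that
# stepping right reaches the goal (state 4). A minimal sketch of the RL
# loop, with made-up hyperparameters.

N_STATES = 5            # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]      # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

for _ in range(2000):                      # training episodes
    s = rng.randrange(N_STATES - 1)        # random non-goal start
    for _ in range(50):                    # step cap per episode
        if rng.random() < EPS:             # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(q[(s2, b)] for b in ACTIONS)
        # Standard Q-learning update toward the bootstrapped target.
        q[(s, a)] += ALPHA * (reward + GAMMA * best_next - q[(s, a)])
        s = s2
        if s == N_STATES - 1:
            break

# Greedy policy after training: which action each state prefers.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The Berkeley result is essentially that, with careful engineering, this same learn-from-experience loop scales to a real robot's legs in minutes rather than days.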
Another locomotion project, from (TechCrunch friend) Pieter Abbeel's lab, is described as "training through imagination." The team set up the robot to predict how its actions will play out, and although it is relatively helpless at first, it quickly gains knowledge about the world and how it works. This leads to better predictions, which lead to better knowledge, and so on in a feedback loop, until the robot is walking in under an hour. It learns just as quickly to recover from being pushed, or "perturbed," as the researchers put it. Their work is documented here.
A potentially more urgent application came earlier this month from Los Alamos National Laboratory, where researchers developed a machine learning method for predicting the friction that occurs during earthquakes, a step toward earthquake forecasting. Using a language model, the team says it was able to analyse the statistical properties of seismic signals emitted by a fault in a laboratory earthquake machine to predict the timing of the next quake.
"The model is not informed by the physics, but it predicts the actual behaviour of the system," said Chris Johnson, one of the project's leads. "We are now making predictions about the future based on past data, which goes beyond describing the current state of the system."
The researchers say the technique is difficult to apply in the real world because it is not clear whether there is enough data to train a predictive system. But they are nonetheless optimistic about applications that could include predicting damage to bridges and other structures.
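The setup described above, statistical features of a signal predicting time-to-failure, can be illustrated with a toy regression. Everything below is synthetic and hypothetical (invented noise model, made-up cycle length); it sketches the shape of the approach, not the Los Alamos model, which was trained on real laboratory seismic data:

```python
import random

# Toy lab-quake sketch: a synthetic "seismic" signal whose noise grows
# as the next failure approaches. A sliding-window statistic (variance)
# is regressed against time-to-failure. All parameters are invented.

rng = random.Random(1)
CYCLE = 200  # time steps per stick-slip cycle (hypothetical)

def signal(t):
    """Zero-mean noise whose amplitude grows as failure nears."""
    phase = t % CYCLE
    return rng.gauss(0.0, 0.1 + phase / CYCLE)

def window_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Build (feature, time-to-failure) pairs from sliding windows.
W = 20
series = [signal(t) for t in range(5 * CYCLE)]
pairs = []
for t in range(W, 5 * CYCLE):
    ttf = CYCLE - (t % CYCLE)  # steps until the next "quake"
    pairs.append((window_variance(series[t - W:t]), ttf))

# Ordinary least squares: ttf ~ a * variance + b
xs, ys = zip(*pairs)
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
a = sum((x - mx) * (y - my) for x, y in pairs) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx
print("slope:", round(a, 1))  # negative: louder signal, failure is closer
```

The fitted slope comes out negative, i.e., higher signal variance means less time until failure, which is the kind of statistical relationship the Los Alamos team reports extracting with far more sophisticated models.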
Last week, MIT researchers warned that neural networks used to simulate biological neural networks should be carefully scrutinized for training bias.
Neural networks are, of course, loosely based on how our brains process and signal information, strengthening certain connections and combinations of nodes. But that does not mean the artificial and the biological work the same way. In fact, the MIT team found that neural-network-based simulations of grid cells (part of the nervous system) produced the expected activity only when their creators carefully constrained them to do so. Left to regulate themselves, as real cells do, they did not produce the desired behaviour.
This is not to say that deep learning models are useless in this context; on the contrary, they are very valuable. But, as Professor Ila Fiete said in a school news article, "they can be powerful tools, but we have to be very careful in interpreting them and determining whether they are truly making new predictions, or even shedding light on what it is the brain is optimizing."