Federated learning is a terrific technique for training AI systems while safeguarding data privacy, but the volume of data traffic it requires has made it impractical for systems built from wireless devices. A new compression technique dramatically reduces the size of those transmissions, opening up new options for AI training on wireless technologies. Federated learning is a type of machine learning that involves several clients, or devices. Each client trains its own model on its own distinct data for a given task, and then sends that model to a central server.
The central server combines all of these models into a hybrid model that outperforms any of the individual models, and then sends the hybrid model back to each client. The procedure then repeats, with each iteration improving the models and raising the system's overall performance. One of the benefits of federated learning is that it can improve the overall AI system without jeopardizing the privacy of the data used to train it. For example, diagnostic AI tools could be trained on private patient data from several hospitals without the hospitals having access to each other's patients' data.
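The train-locally, average-centrally loop described above is the core of federated averaging. A minimal sketch of one such loop, using a toy linear-regression model (all function names and parameters here are illustrative, not from the system described in the article):

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """Hypothetical client step: a few passes of gradient descent on a
    linear model (squared loss) over the client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """One round: each client trains locally, then the server averages
    the returned models, weighted by each client's dataset size."""
    updates, sizes = [], []
    for data, labels in clients:
        updates.append(local_update(global_weights, data, labels))
        sizes.append(len(labels))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy run: three clients, each holding a private shard of the same task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)  # w converges toward true_w
```

Note that the raw data never leaves a client; only model weights travel over the network, which is precisely the traffic the compression work below targets.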
Many tasks could be made easier by using information stored on people's own devices, such as cell phones, and federated learning offers a way to put that data to good use without jeopardizing anyone's privacy. However, there is a stumbling block: training requires a great deal of communication between the clients and the central server as they send model updates back and forth. In places with limited bandwidth or heavy data traffic, that communication can clog wireless connections and slow the whole operation down.
The researchers devised a technique that lets clients compress their model updates into substantially smaller packets before transmission; the central server then decompresses and reassembles them. A set of algorithms developed by the research team makes this possible. Using this method, the researchers reduced the amount of wireless data sent from clients by up to 99 percent. Data sent from the server to the clients is not compressed. The technique allows federated learning to run on wireless devices with limited bandwidth and could, for example, boost the performance of AI applications that interact with humans, such as voice-activated virtual assistants.
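The article does not spell out the team's compression algorithms, but top-k sparsification is one common way to achieve this order of reduction: a client keeps only the largest-magnitude entries of its update and transmits their indices and values. A minimal sketch under that assumption (all names hypothetical):

```python
import numpy as np

def compress(update, keep_fraction=0.01):
    """Keep only the top 1% of update entries by magnitude and send
    their indices and values: roughly 99% fewer numbers transmitted."""
    flat = update.ravel()
    k = max(1, int(len(flat) * keep_fraction))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx], update.shape

def decompress(idx, values, shape):
    """Server side: reassemble the sparse packet into a dense update,
    treating every untransmitted entry as zero."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = values
    return flat.reshape(shape)

rng = np.random.default_rng(1)
update = rng.normal(size=(100, 100))     # 10,000 model parameters
idx, vals, shape = compress(update)      # only 100 values transmitted
restored = decompress(idx, vals, shape)  # dense update on the server
```

In practice, schemes like this often pair sparsification with error feedback (accumulating the dropped entries locally for later rounds) so the discarded information is not lost permanently.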
Artificial Intelligence (AI) has reached a critical juncture in its progress. Thanks to significant breakthroughs in language processing, computer vision, and pattern recognition, AI influences people's lives on a daily basis, from recommending movies to assisting in medical diagnosis. However, with that achievement comes a heightened need to understand and minimize the dangers and drawbacks of AI-driven systems, such as algorithmic discrimination and the use of AI for deception. To ensure that the risks of AI are minimized, computer scientists must collaborate with experts in the social sciences and law.
The panel observed significant progress across subfields of AI, including speech and language processing, computer vision, and other disciplines. Much of this progress has been fueled by advances in machine learning techniques, notably deep learning systems, which have made the leap from academia to everyday applications in recent years.