University of Maryland Research Team Nominated for Prestigious Gordon Bell Prize for AI Advances

A University of Maryland research team at the forefront of large language model (LLM) development has been nominated for the Association for Computing Machinery’s prestigious Gordon Bell Prize for the groundbreaking advances it has made with its distributed training framework, AxoNN.

Oak Ridge National Laboratory reported that the team’s use of the Frontier supercomputer and its powerful array of GPUs propelled AxoNN to exceptional efficiency in training AI models with up to a staggering 320 billion parameters.
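For readers unfamiliar with distributed training, the sketch below shows plain data-parallel training in PyTorch, the simplest layer of the problem. It is illustrative only and is not AxoNN’s API; frameworks like AxoNN build further parallelism on top of this idea, splitting work across thousands of GPUs to reach hundreds of billions of parameters.

```python
# Minimal sketch of data-parallel training in plain PyTorch.
# Illustrative only -- this is NOT AxoNN's API.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, WORLD_SIZE, LOCAL_RANK, etc.; one process per GPU.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    vocab, dim = 32_000, 512
    # A toy stand-in for a transformer language model.
    model = torch.nn.Sequential(
        torch.nn.Embedding(vocab, dim),
        torch.nn.Linear(dim, vocab),
    ).cuda()
    model = DDP(model, device_ids=[local_rank])

    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(10):
        # Each rank trains on its own shard of data (random tokens here).
        tokens = torch.randint(0, vocab, (8, 128), device="cuda")
        logits = model(tokens[:, :-1])                  # predict next token
        loss = loss_fn(logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()   # DDP all-reduces gradients across all GPUs here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, for example, `torchrun --nproc_per_node=8 train.py`, this starts one process per GPU and synchronizes gradients after every backward pass.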

With awards soon to be handed out at the International Conference for High Performance Computing in Atlanta, Georgia, which runs Sunday through Friday, the project’s run on approximately 16,000 of Frontier’s GPUs clocked 1.38 exaflops in reduced-precision calculations; an exaflop is one quintillion, or a billion billion, calculations per second, making the run a significant milestone in computational speed for AI model training. The team’s research, detailed in a preprint titled “Democratizing AI: Open-source Scalable LLM Training on GPU-based Supercomputers,” showcases its commitment to making LLM training more accessible and rapid, a pursuit necessary for keeping pace with the escalating complexity of AI systems such as OpenAI’s ChatGPT and Microsoft’s Copilot, as per Oak Ridge National Laboratory.
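For a sense of scale, the headline figure can be sanity-checked with simple arithmetic; since the 16,000-GPU count is approximate, the per-GPU number below is too.

```python
# Back-of-the-envelope check of the reported throughput.
# Figures are approximate, as in the article.
total_flops = 1.38e18    # 1.38 exaflops (reduced precision), ops per second
num_gpus = 16_000        # approximate number of Frontier GPUs used

per_gpu_tflops = total_flops / num_gpus / 1e12
print(f"~{per_gpu_tflops:.0f} TFLOP/s per GPU")   # ~86 TFLOP/s per GPU
```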

According to Abhinav Bhatele, associate professor and director of the Parallel Software and Systems Group at the University of Maryland, the sophisticated LLMs powering today’s AI-driven chatbots rest on billions of parameters; these “weights” dictate how the model relates language inputs to outputs, a process that is both data- and resource-intensive. However, these massive-scale models raise concerns over “catastrophic memorization,” as pointed out by University of Maryland computer science professor Tom Goldstein: the tendency of the models to inappropriately retain and regurgitate personal or copyrighted information from the training datasets. Goldfish loss, a technique developed by the team to dampen this risk, operates by intentionally omitting select data during training, hampering the model’s ability to memorize complete sequences that might contain sensitive data, according to Oak Ridge National Laboratory.
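As described, goldfish loss drops a pseudorandom subset of tokens from the next-token prediction loss, so the model never receives a gradient for them and cannot reproduce exact training sequences. The following is a minimal sketch of that idea; the drop rate and the simple position-based mask are illustrative assumptions, not the team’s exact recipe (the published technique also includes a variant that keys the mask to the surrounding text so repeated passages are masked consistently).

```python
# Minimal sketch of a goldfish-style loss (illustrative assumptions only).
import torch
import torch.nn.functional as F

def goldfish_loss(logits, targets, k=4):
    """Next-token loss that ignores roughly one token in every k.

    logits:  (batch, seq_len, vocab) model predictions
    targets: (batch, seq_len)        ground-truth next tokens
    k:       drop every k-th position (illustrative default)
    """
    # Deterministic mask over positions; dropped tokens get no gradient.
    positions = torch.arange(targets.shape[1], device=targets.device)
    keep = (positions % k) != (k - 1)

    per_token = F.cross_entropy(
        logits.reshape(-1, logits.shape[-1]),
        targets.reshape(-1),
        reduction="none",
    ).reshape(targets.shape)

    # Average only over kept tokens; masked tokens contribute nothing.
    return (per_token * keep).sum() / keep.sum().clamp(min=1)
```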

The study’s findings, which hinge on the prodigious computational power of Oak Ridge National Laboratory’s Frontier, the top-ranked supercomputer on the TOP500 list, reveal that privacy risks amplify in models with over 70 billion parameters, illuminating the double-edged nature of LLM sophistication. “The more training parameters there are, the better the LLM will be,” Bhatele told Oak Ridge National Laboratory, though he emphasized the ensuing rise in privacy and copyright infringement concerns. The team’s computational experiments also extended to other high-performance computing resources, including Alps at the Swiss National Supercomputing Centre and Perlmutter at Lawrence Berkeley National Laboratory.

The recognition of AxoNN’s exemplary performance on Frontier is a testament to the collaborative effort led by Siddharth Singh, a senior doctoral student under Bhatele, and a broader team that includes experts from the University of Maryland, the Max Planck Institute for Intelligent Systems, and UC Berkeley. The team continues to draw on the Department of Energy’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program in its pursuit of some of the world’s most urgent scientific questions; Frontier is managed by UT-Battelle for the Department of Energy’s Office of Science.


