According to PANews, Apple's research team has introduced OpenELM, an open language model. The model uses a layer-wise scaling strategy to allocate parameters non-uniformly across the layers of the transformer, improving accuracy. For instance, with a parameter budget of about 1 billion, OpenELM improves accuracy by 2.36% over OLMo while requiring half as many pre-training tokens.
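To illustrate the idea (this is not Apple's published configuration), the sketch below shows how a layer-wise scaling rule might assign more attention heads and wider feed-forward blocks to deeper layers. The base sizes and the alpha/beta scaling ranges are assumptions chosen for demonstration only.

```python
# A minimal sketch of layer-wise scaling: rather than giving every
# transformer layer the same width, interpolate attention-head count
# and feed-forward width linearly with depth.

def layerwise_config(num_layers: int,
                     base_heads: int = 12,
                     head_dim: int = 64,
                     alpha: tuple = (0.5, 1.0),   # head-count scaling range (assumed)
                     beta: tuple = (0.5, 4.0)):   # FFN-width scaling range (assumed)
    configs = []
    for i in range(num_layers):
        t = i / max(num_layers - 1, 1)           # 0.0 at the first layer, 1.0 at the last
        a = alpha[0] + t * (alpha[1] - alpha[0])
        b = beta[0] + t * (beta[1] - beta[0])
        heads = max(1, round(a * base_heads))
        configs.append({
            "layer": i,
            "num_heads": heads,
            "attn_dim": heads * head_dim,
            "ffn_dim": round(b * base_heads * head_dim),
        })
    return configs

for cfg in layerwise_config(num_layers=4):
    print(cfg)
```

Under a fixed parameter budget, shifting capacity toward the layers where it helps most is what lets a model of this size match or beat a uniformly sized baseline.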

Unlike previous releases that provided only model weights and inference code, with pre-training done on private datasets, OpenELM includes a complete framework for training and evaluating language models on publicly available datasets, including training logs, multiple checkpoints, and pre-training configurations. Apple has also released code to convert models to the MLX library, enabling inference and fine-tuning on Apple devices.
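For readers who want to try this on Apple silicon, a minimal inference sketch using the community mlx-lm package might look like the following. The model repository name below is an assumption; substitute whichever MLX conversion of OpenELM you actually use.

```python
# pip install mlx-lm   (requires an Apple-silicon Mac)
from mlx_lm import load, generate

# Repo name is a placeholder for an MLX-converted OpenELM checkpoint.
model, tokenizer = load("mlx-community/OpenELM-1_1B-instruct")

text = generate(
    model,
    tokenizer,
    prompt="Explain layer-wise scaling in one sentence.",
    max_tokens=64,
)
print(text)
```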

Earlier, in February, Apple CEO Tim Cook said that Apple's generative AI features would launch 'later this year'. Rumors suggest that iOS 18, expected in June, could be the 'biggest' update in the history of Apple's iOS, and that the first AI-focused iPhone will launch in September.