(CTN News) – Alibaba (BABA) on Friday launched two artificial intelligence (AI) models with visual localization capabilities that can understand images and carry out more complex conversations, as competition to develop more sophisticated AI tools intensifies.
The new models, Qwen-VL and Qwen-VL-Chat, are better at understanding and responding to complex visual signals, such as text within images, than Alibaba’s prior models.
For instance, Qwen-VL and Qwen-VL-Chat can decipher the text on an image of a sign and respond to requests for directions based on what the sign says.
Both new models are also open source, meaning anyone can use them to build their own AI applications.
Although Alibaba will not earn licensing fees from the release, open-sourcing the models could help it attract more users as big tech companies compete for market share.
The move came just a day after Meta announced a new AI model designed to assist with writing code, which Meta said in a statement “has the potential to make workflows faster and more efficient for developers and lower the barrier to entry for people learning to code.”
Alibaba built the models on its large language model, Tongyi Qianwen, which understands both Chinese and English.
Alibaba’s American depositary receipts (ADRs) ticked 0.3% higher in pre-market trading on Friday following the news.