718,000 Square Meters! The Secret of the Xiaomi Automobile Factory: a Roof Covered with Photovoltaic Panels That Generates 16.4 Million kWh of Electricity a Year

  On April 17th, Xiaomi's official technology account published a post officially unveiling the Xiaomi automobile factory.

  The Xiaomi automobile factory is located in the Beijing Economic and Technological Development Zone and covers 718,000 square meters. It includes an R&D and testing base; six workshops purpose-built for new energy vehicles (stamping, die casting, body, painting, battery, and final assembly); and a test track with a total length of 2.5 km.

  The site also houses a Xiaomi automobile factory store, making it a smart campus that integrates R&D, production, sales, and customer experience.

  Xiaomi emphasized that the group has always been committed to sustainable development, and the factory was designed around this principle from the outset.

  In addition to controlling exhaust gas and wastewater from production, the factory makes use of a variety of green, renewable resources.

  All domestic and industrial wastewater is routed to an on-site treatment station, which greatly improves treatment efficiency. After passing through pretreatment of industrial wastewater, mixed sewage treatment, gray-water treatment, and the reclaimed-water system, up to 50% of the factory's wastewater can be reused.

  The factory roof carries a 16.2 MW distributed photovoltaic power station with a total area of 154,579 square meters, and its estimated annual output is about 16.4 million kWh.

  The photovoltaic installation has an expected service life of more than 25 years, providing long-term renewable energy for Xiaomi's vehicle manufacturing. It is projected to cut carbon emissions by about 9,905 tons a year, equivalent to the annual carbon dioxide absorption of 540,000 trees.
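As a sanity check on the figures above, the implied capacity factor and specific yield (computed here; the article states only the inputs) are in the normal range for rooftop solar at Beijing's latitude:

```python
# Back-of-the-envelope check of the article's photovoltaic figures
# (my own arithmetic; the article gives only capacity and annual output).
CAPACITY_KW = 16_200          # 16.2 MW installed capacity
ANNUAL_KWH = 16_400_000       # ~16.4 million kWh estimated annual output
HOURS_PER_YEAR = 8_760

# Capacity factor: actual output vs. running at full rated power all year.
capacity_factor = ANNUAL_KWH / (CAPACITY_KW * HOURS_PER_YEAR)

# Specific yield: annual kWh produced per kW of installed panels.
specific_yield = ANNUAL_KWH / CAPACITY_KW

print(f"capacity factor: {capacity_factor:.1%}")                 # 11.6%
print(f"specific yield: {specific_yield:.0f} kWh/kWp per year")  # 1012
```

A capacity factor of roughly 11-12% is typical for fixed rooftop arrays in northern China, so the quoted generation estimate is internally consistent.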

Meta Takes On OpenAI as a Domestic "Small Model" Goes Open Source: Where Is the "Hundred-Model War" Headed?

  Since the beginning of this year, the global internet giants have set off a "hundred-model war", with Microsoft, Google, Baidu, and Alibaba entering the fray one after another. After more than half a year of competition, the tech giants now face a new fork in the road over the large-model ecosystem: as parameter counts approach a "ceiling", will the future of large models be closed source or open source?

  An open-source model that can run on a home computer.

  On August 3rd, two open-source models, Qwen-7B and Qwen-7B-Chat, were released on the domestic AI developer community ModelScope. They are, respectively, Alibaba Cloud Tongyi Qianwen's 7-billion-parameter general-purpose base model and its dialogue model; both are open source, free, and available for commercial use.

  According to reports, Tongyi Qianwen Qwen-7B is a foundation model supporting Chinese, English, and other languages, trained on a dataset of more than 2 trillion tokens (text units), while Qwen-7B-Chat is a Chinese-English dialogue model built on that foundation and aligned with human cognition. In short, the former is the "foundation" and the latter is the "house" built on it.

  Hands-on tests show that the Qwen-7B model performs well overall. On MMLU, the English-proficiency benchmark, its score generally exceeds that of mainstream models of the same parameter scale, even surpassing some models with 12 billion and 13 billion parameters. On the validation set of the Chinese benchmark C-Eval, it also achieved the highest score at its scale. Qwen-7B likewise ranks among the best on GSM8K (mathematical problem solving) and HumanEval (code ability).

  In other words, in tests of Chinese and English writing, math problem solving, and coding, Qwen-7B is a genuine "top student", even outscoring international mainstream models at the same parameter level.

  The industry is even more interested in Qwen-7B's usability. As is well known, training and running mainstream large models requires dedicated AI training chips such as the NVIDIA A100, which cost roughly 10,000 to 15,000 dollars apiece, are dominated by US and European suppliers, and are nearly impossible to buy in China. The domestic Qwen-7B model, by contrast, supports deployment on consumer graphics cards: a high-performance home computer is enough to run it.
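The article does not spell out why a 7-billion-parameter model fits on consumer hardware, but the weight-memory arithmetic makes it plausible (a rough illustration of my own; activation and KV-cache overhead are ignored, and real deployments vary):

```python
# Rough VRAM needed just to hold the weights of a 7-billion-parameter model
# at common precisions (illustrative arithmetic, not official Qwen figures).
PARAMS = 7_000_000_000

def weight_gib(bytes_per_param: float) -> float:
    """Weight memory in GiB at the given bytes-per-parameter precision."""
    return PARAMS * bytes_per_param / 1024**3

fp16 = weight_gib(2)    # 16-bit floats: 2 bytes per parameter
int8 = weight_gib(1)    # 8-bit quantization
int4 = weight_gib(0.5)  # 4-bit quantization

print(f"fp16: {fp16:.1f} GiB, int8: {int8:.1f} GiB, int4: {int4:.1f} GiB")
# fp16: 13.0 GiB, int8: 6.5 GiB, int4: 3.3 GiB
```

At fp16 the weights alone fit within the 16-24 GB of high-end consumer GPUs, and quantized variants fit far smaller cards, which is consistent with the article's claim about home computers.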

  Thanks to free commercial use and its low barrier to entry, the Qwen-7B release quickly drew the attention of AI developers. Within a single day, the model was starred by more than a thousand developers on the code-hosting platform GitHub, most of them Chinese. As Alibaba Cloud said in its statement: "Compared with the lively open-source AI ecosystem in the English-speaking world, the Chinese community has lacked an excellent foundation model. Tongyi Qianwen's addition is expected to give the open-source community more choices and advance the construction of China's open-source AI ecosystem."

  Open source, or closed source?

  Qwen-7B is not the first open-source large model. GPT-2, the predecessor of ChatGPT, was also fully open source: its code and framework could be used freely online, and the related papers were publicly available. After ChatGPT swept the world, however, OpenAI switched to closed-source development, and the code of models such as GPT-3 and GPT-4 became OpenAI trade secrets.

  "Open source" means open source code: once a large model is declared open source, anyone can publicly obtain its source code and modify or even redevelop it within the limits of its license. By way of analogy, the source code is like the line drawing of a painting; everyone can fill in their own colors to create their own artwork.

  Closed source is the opposite: only the owner of the source code (usually the software developer) has the right to modify it; others cannot obtain the "line drawing" and can only buy the finished product from the developer.

  The pros and cons of each path are clear. Open-sourcing a large model undoubtedly attracts more developers and enriches its applications, but supervision and commercialization become harder, and the owner risks the awkward position of "sewing the wedding dress for someone else". Open source, after all, is about growing a shared ecosystem, and at this stage it is hard to pin down how much money it can earn; those very problems are exactly where closed source finds its opportunity.

  Open or closed is a live-or-die question for large models, and the international giants have already given their answers.

  Meta, Facebook's parent company, released its large model Llama 2 last month, open source and free for developers and commercial partners, while OpenAI has firmly kept GPT-4 closed source, which both preserves its lead in the generative AI industry and brings in more revenue. According to the magazine Fast Company, OpenAI's revenue in 2023 is expected to reach 200 million US dollars, including API access services and chatbot subscription fees.

  Domestic large models have gradually begun to go their separate ways. Alibaba Cloud's Tongyi model was opened to enterprises as early as April this year, and open-sourcing Qwen-7B goes a step further. Baidu's ERNIE Bot has also recently announced that it will gradually open its plug-in ecosystem to third-party developers, helping them build their own applications on the Wenxin model.

  Huawei, in contrast, is taking a different path. At the launch of Pangu Model 3.0, Huawei Cloud publicly stated that the Pangu model's full technology stack was independently developed by Huawei, with no open-source technology adopted. At the same time, because the Pangu model aggregates large amounts of industry data (including trade secrets), it will not be open-sourced in the future.

  Big parameters, or small and beautiful?

  The open-sourcing of Qwen-7B also raises another question: how many parameters does a large model really need?

  There is no denying that the parameter scale of large models keeps expanding. Take OpenAI's GPT series: GPT-1 contained only 117 million parameters, while GPT-3 reached 175 billion, an increase of more than a thousandfold in a few years, and GPT-4's parameters are reported to exceed the trillion level.
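The "more than a thousandfold" claim checks out against the two parameter counts quoted above (a one-line computation of my own):

```python
# Growth factor from GPT-1 to GPT-3, using the parameter counts cited above.
gpt1 = 117_000_000        # GPT-1: 117 million parameters
gpt3 = 175_000_000_000    # GPT-3: 175 billion parameters

growth = gpt3 / gpt1
print(f"GPT-1 -> GPT-3: ~{growth:.0f}x")  # ~1496x
```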

  Domestic large models are similar. Baidu's Wenxin model has 260 billion parameters, Tencent's Hunyuan model has reached the 100-billion level, Huawei's Pangu model is estimated to be close to GPT-3.5, and Alibaba's Tongyi model has officially claimed 10 trillion parameters. By incomplete statistics, China already has at least 79 large models with more than 1 billion parameters.

  However, bigger parameters do not necessarily mean a stronger model. At the World Artificial Intelligence Conference, Wu Yunsheng, vice president of Tencent Cloud, offered an apt metaphor: "It is like athletes doing strength training: a weightlifter may need to lift a 200-kilogram barbell, a swimmer perhaps 100 kilograms. Different kinds of athletes do not all need to train with 200 kilograms."

  It is well known that the higher a model's parameter count, the more resources and cost it consumes. Industry-specific vertical models, however, need not blindly pursue "large scale" or "high parameters"; they can size the model to customer needs. The BioGPT-Large model, for example, has only 1.5 billion parameters, yet its accuracy on biomedical professional tests beats that of general-purpose models with 100 billion parameters.

  Sam Altman, co-founder of OpenAI, has also said publicly that OpenAI is approaching the limits of LLM (large language model) scale: making models larger will not necessarily make them better, and parameter count is no longer an important measure of model quality.

  Wu Di, head of intelligent algorithms at Volcano Engine, holds a similar view: in the long run, cost reduction will be a key factor in deploying large models. "A well-tuned small or medium model may perform as well as a general-purpose large model on a specific job, at perhaps one-tenth of the cost."

  For now, nearly every domestic tech company has secured a ticket to the large-model race, but the real choice of road has only just begun.

"Gemini Man" Holds Its China Press Conference: Ang Lee Creates "the Most Expensive Actor".

On October 14th, the film "Gemini Man" held a grand China press conference in Shanghai. Not only did director Ang Lee, producer Jerry Bruckheimer, and star Will Smith all appear, but the film's heavyweight guest, the digitally created young Will Smith, "Junior", also arrived in a special way.

This digital character, created entirely with visual effects, can be called "the most expensive actor in Hollywood" and is the role Ang Lee labored over most. On site, Ang Lee revealed that the idea of creating a digital "person" came to him after making the visual-effects tiger for Life of Pi. To make Junior look real, he not only watched footage of a young Will Smith but also studied Will's face at 6,000x magnification, memorizing every detail, which let him say confidently, "I know his face better than his mother does."

The production was arduous, and Will's performance was no easier. Junior is reportedly based on motion-capture technology, requiring Will to play him from start to finish. For a seasoned man to go back and play a callow youth is so hard that Ang Lee could not help remarking, "This does not come through what you say but through your eyes, so you have to strip away the maturity and leave a sense of innocence. It is even harder to play a virgin when you are not one." As he spoke, he asked Will to demonstrate on the spot, for instance how a virgin and a non-virgin walk differently. Will's vivid performance had everyone laughing, and the atmosphere was joyful.

Studying Will's face at 6,000x magnification.

Ang Lee is confident: "I know it better than his own mother."

The most important guest that day was the 23-year-old "Junior" Will Smith, a fully digital character created by Ang Lee with 100% visual effects. To convince the audience, Ang Lee watched nearly all the footage of the young Will Smith. "We magnified his face 6,000 times. I would not dare say this about anything else in life, but when it comes to that face, I know it better than his biological mother." Will, meeting Junior for the first time, admitted he was startled by how lifelike the digital Junior was, and was "completely shocked and surprised".

Putting a digitally recreated human on the big screen is a challenge the industry has been reluctant to touch. Ang Lee admitted, "This is the hardest problem in visual effects, and it took a great deal of money and effort this time." He even revealed that Junior cost two to three times as much as Will Smith himself, arguably making him the most expensive actor in Hollywood.

Will Smith successfully meets his idol Ang Lee.

"Working with him is my lifelong dream."

As early as 2013, Will Smith publicly reached out to Ang Lee about working together, and after many years his fan's dream has finally come true, leaving him unable to hold back: "Working with Ang Lee has been my lifelong dream, and this time it has finally come true." His excitement was beyond words; at heart he was an adorable star-struck teenager.

So how did the collaboration come about? On site, Will revealed that Ang Lee won him over with a single phone call. "Ang Lee called me, and I didn't even listen to what I was supposed to play; I just said yes! Yes!" He added that there are some directors in this world for whom you need no reason at all: at the mention of the name, you say yes. Ang Lee also said the chemistry between them was excellent, saying bluntly, "He is not only an excellent actor but a big star. His willingness to commit unconditionally moves me deeply."

Playing a young man's innocence "stumps" Will.

He demonstrates on the spot how a virgin walks.

Of course, the "cost" behind Will's successful meeting with his idol was not small. "Gemini Man" may be the most challenging performance of Will's career: he plays himself at two different ages, one a mature, steady middle-aged man and the other a callow youth. Will said the hardest part was performing the two men's different gazes: "Different people have different eyes. Conveying that young, unformed state through the eyes is the hardest thing to grasp."

Ang Lee confirmed that Will's task was not easy this time: "As a seasoned man, you must remove the maturity from your eyes and perform innocence. It is truly difficult to play a virgin when you are not one." Called on by Ang Lee, Will demonstrated the different ways a virgin and a non-virgin walk, and his spot-on performance had everyone laughing and admiring his acting chops.

The staging that day was also inventive: crisscrossing blue and red lights set off the splendor of the BFC Bund Finance Centre. At the evening red-carpet event, mirrors were placed in a particularly ingenious way. In the film, the killer played by Will Smith dares not look in a mirror and face himself, but that night, walking the red carpet, he shone in front of the mirrors with all the charisma of a big star. The other guests also signed their names on the mirrors, each seeing a second self in the reflection, echoing the "Gemini" theme. "Gemini Man" opens nationwide on October 18th.

Chinese Billiards World Championship: Two Chinese Players Born After 2000 to Contest the Men's Final.

  The 6th CBSA Chinese Billiards World Championship is in full swing, with the final line-up decided on December 14th. Zhao Ruliang, runner-up in the men's event at the 2019 championship, defeated John Young in a "master versus apprentice" match, and in the final he will face a peak showdown with another post-2000 dark horse, Shen Shenyi.

  The opening ceremony of this Chinese Billiards World Championship was staged on the evening of December 8th at the Yushan Sports Center Square in Jiangxi Province, attended by 496 players from 41 countries and regions worldwide, along with billiards-association officials from 12 countries including Russia. In recent years, the booming sport of Chinese billiards has been becoming a new and powerful link between China and the world. (Reporter: Jiang Tao; video source: Star Billiards)

At the End of National Highway 318: a Reunion Dinner for Guardians and Builders.

China Tibet Net News: February 11th was the thirtieth day of the twelfth lunar month (New Year's Eve), which this year also coincided with the thirtieth day of the Tibetan calendar. At the end of National Highway 318, two groups of people with different identities spent a special New Year together.
It is reported that the workers building National Highway 318 at Zhangmu Port in Nyalam County, Shigatse City, Tibet Autonomous Region, answered the country's call to stay put for the holiday and decided to spend the New Year at Zhangmu Port. On learning this, the police of the Nyalam Entry-Exit Frontier Inspection Station brought them New Year supplies and invited them to celebrate together.
The picture shows the police at Nyalam Entry-Exit Frontier Inspection Station preparing Spring Festival pendants for the construction workers of National Highway 318 at Zhangmu Port.
The picture shows the police at Nyalam Entry-Exit Frontier Inspection Station sending Spring Festival pendants to the construction workers of National Highway 318 at Zhangmu Port.
The picture shows the police at Nyalam Entry-Exit Frontier Inspection Station and the construction workers of National Highway 318 at Zhangmu Port posting Spring Festival couplets together.
"This is the happiest New Year we have spent in Tibet. The police at the border checkpoint are like our neighbors. We feel the taste of home in their big family." The construction workers of National Highway 318 at Zhangmu Port told reporters.
The picture shows the police at Nyalam Entry-Exit Frontier Inspection Station inviting the construction workers of National Highway 318 at Zhangmu Port to celebrate the New Year together on New Year's Eve, February 11th.
The picture shows the police at Nyalam Entry-Exit Frontier Inspection Station having a reunion dinner with the construction workers of National Highway 318 at Zhangmu Port on New Year's Eve, February 11th.
The picture shows the police at Nyalam Entry-Exit Frontier Inspection Station playing games with the construction workers of National Highway 318 at Zhangmu Port at the reunion dinner.
The picture shows the police at Nyalam Entry-Exit Frontier Inspection Station waving goodbye to the construction workers of National Highway 318 at Zhangmu Port after the brief gathering.
One group guards the sacred land; the other builds a happy home. With their own concrete actions, they are writing a touching patriotic chapter in this remote border town and quietly contributing together to the construction of Zhangmu Port. (China Tibet Network correspondent: Ho Yuhang)