Dr. Chen Qiang, Senior Technical Director of 360AI Research Institute: Current Status and Breakthrough Points of China's New Generation of Artificial Intelligence Key Common Technology System

From August 30 to 31, the "OFweek (2nd) China Artificial Intelligence Industry Conference," hosted by China's high-tech industry portal OFweek and the High-Tech Association and organized by the OFweek Artificial Intelligence Network, opened at the Shanghai International Sourcing Exhibition Center.

The conference aims to build a professional platform for exchange and cooperation among enterprises and practitioners in the artificial intelligence field, bringing together more than 1,000 AI professionals, including executives of internationally renowned companies, industry experts, and analysts, to discuss the challenges facing artificial intelligence, the commercialization of the industry, and how to position for the large market ahead.

At the conference, the organizers invited a number of prominent guests and business representatives to give speeches, which drew warm applause from a packed audience.

In this forum, Dr. Chen Qiang, Senior Technical Director of the 360 AI Research Institute, shared with the audience and the media the current status and breakthrough points of China's new generation of key common technologies for artificial intelligence.

In his talk, Dr. Chen Qiang introduced the three business modules behind 360's "video brain": security (online and offline security), IoT smart hardware (360 cameras, 360 children's watches, 360 connected-car products), and content distribution (feed discovery, short video, live streaming, and information flow). He also discussed how to raise short-video analysis throughput and the daily volume of face comparisons in security monitoring, as well as several trends: the structuring of content, the rapid growth in the number of IoT smart hardware devices, the spread of 4G/5G, the shift of Internet traffic toward video, and the output capacity of the AR special-effects platform.

The following is a transcript of Dr. Chen Qiang's speech, edited by the OFweek Artificial Intelligence Network without changing its original meaning:

Good afternoon, everyone. First of all, thank you to Vico for the invitation; I'm glad to take part in this very interesting session. Let me explain one thing first: the original topic was very broad, and since I only have 20 minutes, I have adjusted it slightly. The revised topic actually fits the current state of AI development: starting from artificial intelligence technology itself, I will talk about how it combines with concrete scenarios and specific products, that is, the question of putting artificial intelligence into practice.

Artificial intelligence background

When we talk about using artificial intelligence to solve a problem, whether it is "AI + industry" or "industry + AI", we usually discuss it from four aspects. The question we care about is: in what kind of scenario does the AI technology obtain what kind of data, and what algorithms and computing power does it combine to solve a specific problem? The big difference between an ordinary company and a platform-style company is that platform companies tend to be somewhat stronger in algorithms and computing power, with stronger technology R&D capabilities. But a startup can still achieve distinctive results in specific scenarios and with specific data. So today my keynote mainly introduces what we do inside 360 from three levels: scenarios, concepts, and data.

Let me briefly explain why 360 does artificial intelligence, and how 360 combines AI with its own business. In the eyes of the public, 360 may be primarily a security company, but in fact 360's main business has always operated on two levels: security on the surface and content underneath, that is, traditional Internet content, in which artificial intelligence plays a decisive role. On the security side, we hope to gradually expand from online security to offline security. On the content side, AI is more of an enabler: we want AI to make information production, analysis, and acquisition smarter and more efficient. Therefore, combining 360's online and offline products, our AI Institute mainly provides many productized solutions.

AI has already become a standard tool across the Internet industry. In the past two years, as the Internet has extended offline and content distribution has demanded greater efficiency, we have felt it is most promising to work in the video direction. Our current internal strategy at 360 is called "two ones". What does that mean? Our core value lies in 360's security, so the two services of online security and offline security form the strategic level, alongside the technology. The "two ones" are the two business directions we are now focusing on: a business scenario that combines offline security with hardware, and content distribution, where we combine AI technology so that the efficiency and effect of distribution can be optimal. Specifically, on the IoT hardware side, we have released many products, including cameras with millions of users and the second and third generations of children's watches.

Online content distribution is a top priority for 360, because it is the battleground between 360 and all other Internet companies; in search, short video, live streaming, and information feeds, AI plays a decisive role. The video brain, we believe, is the core technology point and the core technical solution output in the middle, and it gives a huge boost to these two businesses. First, if hardware is not intelligent and has no video and voice understanding, it cannot be distinguished from traditional smart hardware. And video analysis has been another watchword since last year: from short video to live-streaming products, more and more products distribute Internet content, and AI plays a decisive role in them.

Representative applications of the video brain

In fact, over the past two or three years, our output has gradually shifted from individual AI or video-analysis technology points to complete solutions. Technology-point output is more about how a common AI technique can help an industry or further empower a new business. But at this stage, as the industry develops, what is needed is more than a single smart technology: it is a complete solution. So for some of our internal business lines, we have built many industry solutions on top of the video brain, including security, short video, and private deployments. This also incorporates some concepts currently discussed in the market, including cloud-edge scenarios, which I will introduce in detail later.

Our video brain provides a complete solution for short video. The short-video ecosystem is relatively complete: from the author's production to the interaction between authors and users, the platform needs to understand and analyze the uploaded videos and then distribute them, and the effect of distribution ultimately feeds back to the production side. This is a complete ecosystem.

In the video brain, or video analysis, every process and every solution we build aims to give a big push to each node of that ecosystem. For example, on the production and enhancement side, it can produce many new paradigms; it is a high-efficiency production tool for video that helps users and authors express themselves better in their videos. Short-video content analysis is a basic necessity for any video website or app, and on the distribution side we combine recommendation algorithms to achieve more, better, and more efficient distribution.

One of the more important things we do here is structured video analysis of short videos. By understanding the audio and visuals inside a short video, we make the content more structured, extracting tags and semantics from the video's sound and imagery. Let me give a few clear scenarios where this is used, starting with content review. As we all know, one of the biggest risks in the content industry, for every platform, is how to correctly respond to government regulation. People used to focus on the efficiency of short-video review: for example, if 1 million videos are uploaded every day, enough manpower may be needed to review them, and the machine can serve as an aid.

But for us practitioners, accuracy is actually the more important point. We hope that artificial intelligence will not only improve efficiency but also improve accuracy. How do we improve accuracy? What we do internally is run human operators and the machine in parallel, a concept of parallel review, so that the improvement in accuracy is an additive relationship, not a multiplicative one.
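The speech does not spell out the arithmetic, but the additive-vs-multiplicative point can be illustrated with a minimal sketch. Assume (hypothetically) that the machine and the human reviewer each independently catch some fraction of violating videos. In a serial pipeline the human only sees what the machine flags, so recall multiplies; in a parallel pipeline a violation is caught if either reviewer flags it, so recall is roughly the sum (minus the overlap):

```python
# Hypothetical, independent recall rates for illustration only.
MACHINE_RECALL = 0.80  # fraction of violations the machine flags
HUMAN_RECALL = 0.90    # fraction of violations a human reviewer flags

def serial_recall(machine: float, human: float) -> float:
    """Serial review: a violation reaches the human only if the
    machine flagged it first, so recall is multiplicative."""
    return machine * human

def parallel_recall(machine: float, human: float) -> float:
    """Parallel review: a violation is caught if EITHER reviewer
    flags it, so recall is additive minus the overlap."""
    return machine + human - machine * human

print(f"serial:   {serial_recall(MACHINE_RECALL, HUMAN_RECALL):.2f}")    # 0.72
print(f"parallel: {parallel_recall(MACHINE_RECALL, HUMAN_RECALL):.2f}")  # 0.98
```

With these illustrative numbers, the parallel arrangement catches 98% of violations even though neither reviewer alone exceeds 90%, while the serial arrangement drops below either reviewer's individual recall.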

In addition, based on this understanding of video content, we can associate a great deal of related content and find target users' points of interest more quickly and efficiently, thus providing solutions for platforms and their distribution operators.

Let me also introduce 360's internal security-monitoring business. 360's security monitoring may be a little different from what is on the market. First, it is a to-C scenario, so its scale is very large. How large? About a year ago, the number of daily online users had basically reached the million level, which means 1 million video streams passing through our servers in real time. Its other feature is that, as a to-C product, its security needs differ from traditional security. We combine two aspects. One is security in the narrow sense, the basics such as stranger face recognition. The other is peace of mind: as a consumer, a user wants to know who came and went during the day, so we analyze the day's video and can tell them what a whole day looked like for each family member.

In this regard, for example, some users like to combine the live-view scenario with interaction with children at home, and we have done work here too, including detecting family members and intelligent automatic wake-up. More importantly, we have millions of users, and the training and results achievable on our own data are not available to many companies in the industry. We ran an experiment: if we built the required models and online system using only publicly shared data, face-comparison accuracy might be only 78%, which is a big problem in real home scenarios; combining our private data, the final accuracy can reach 98%.

The other piece I want to introduce to you is what we have done in the past year: the AI special-effects platform. It actually combines many technical points, such as facial keypoint localization; at present it locates 207 keypoints and achieves real-time processing on mobile devices. One of its biggest application scenarios is Internet entertainment. Plain content may not be particularly interesting, so in live-streaming or camera scenarios, combined with the strong interaction of anchors or video producers, we add technical elements, such as AI stickers built on facial keypoint localization, to create more fun effects.

This is an on-device platform that emphasizes interaction between people and the outside world. The most important point is face analysis. At present, on LFW, the earliest standard face benchmark at home and abroad, accuracy is just under 99.7%. At last year's 3.15 gala, combining 360's security scenarios, we showed everyone that there are still many unreliable factors in face verification.

At last year's gala, we cracked commonly used functions of identity verification in real time. What I just talked about is more at the business level; in fact, within 360 there are many focal points in artificial intelligence R&D. For example, we focus on three points: small, fast, and accurate. "Small" means that when we design a model, we hope it achieves good results and efficiency in the cloud, on mobile, and even at the edge. "Fast" means online speed, which is very much needed. For this we have worked with many chip manufacturers and industry partners, and inference-acceleration technology compresses deep learning models so that they achieve better prediction speed.

We also want our artificial intelligence algorithms to have distinctive characteristics, in two respects. On the one hand, we hope to earn recognition in fair international competitions.

Over the past eight years, the 360 AI team has basically won more than ten championships or nominations, including in ImageNet, the "World Cup" of computer vision, and the well-known PASCAL competition. In addition, we have made some original contributions to the industry and to artificial intelligence research. At present, two of our techniques are widely used: one is a network from 2014 called NIN, which has basically become a standard element of deep learning, and the other is Dual Path Networks, which we used last year. Last year we also participated in ImageNet's standard object-localization competition and won the championship.

Object localization is one of the most important tasks in computer vision. Its purpose is to find common objects in general videos or images, with perhaps more than a thousand categories. When we participated in the final edition of the competition last year, Dual Path Networks competed in three tracks and, on 14 indicators, ranked in the top three worldwide on all of them.

In conclusion, why do we think the video brain, or video content analysis, is the future trend? We see two major trends. The first is the number of IoT smart hardware devices: video-brain capabilities can give hardware many abilities that ordinary hardware lacks. The other is the popularity of 4G and 5G: data from the past year show that more than 70% of Internet traffic has become video. That video data needs to be analyzed and supervised, which naturally creates a fairly large market, and we hope the video brain can address it. As for what we can currently do, or provide to the industry, including capabilities open to the industry: in short-video content analysis, we already handle first-tier PGC video processing; in security monitoring, daily face comparison, not just face detection, runs on million-level online video streams with 150 million API calls per day. In addition, on the special-effects side, we have begun to open up to the industry and promote it, and major mobile-phone manufacturers have begun to cooperate with us. That is all for my sharing today. Thank you all.
