Embedded AI is finding increasingly broad application, and deep learning is currently one of its most widely used algorithms. The algorithm requires a complete and accurate training model before the inference stage can deliver its benefits. At this forum, Jiun-In Guo, Associate Dean of the College of Electrical and Computer Engineering at National Yang Ming Chiao Tung University (NYCU) and Director of its Embedded Artificial Intelligence Research Center, delivered a talk entitled "Construction and Application of Embedded AI Deep Learning Computing Models."
NYCU's Intelligent Vision System Lab (NYCU iVS Lab) focuses on a range of intelligent vision research, including autonomous driving. Jiun-In Guo noted that self-driving vehicles have become a shared trend across the global automotive and technology industries, and the lab's work in this field covers the functions and related technologies required by various ADAS features. On the sensor side, its research includes LiDAR as well as vision sensors. He pointed out that image recognition is currently the mainstream direction of AI development, and that in the automotive domain AI can also be applied to LiDAR data for object detection and analysis.
On introducing AI, he advised that developers must first master the image data and the core software and hardware technologies before building AI models. Here Jiun-In Guo stressed that models must use fixed-point rather than floating-point arithmetic in order to meet the requirements of self-driving systems. To illustrate current AI design trends and challenges, he cited a recent electric vehicle accident. A crash occurred on a Taiwanese highway when a driver let the electric car drive hands-free and it ran straight into an overturned container truck lying on the road ahead. Under normal conditions, that brand of electric car can detect a vehicle ahead and brake automatically when it gets too close, but in this incident the AI could not recognize the stationary, overturned container truck as a vehicle, and its white body further confused the visual judgment, ultimately causing the crash.
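The talk does not name a specific toolchain for the fixed-point requirement. As a minimal sketch of what such a conversion can look like, the following Python example uses TensorFlow Lite post-training quantization to turn a floating-point model into a full-integer (int8) one; the saved-model path and the random calibration data are hypothetical placeholders.

```python
# Minimal sketch: converting a trained floating-point model to fixed-point (int8)
# for embedded deployment. The toolchain is an assumption (the talk does not name
# one); "saved_model_dir" and the calibration loader are hypothetical placeholders.
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Calibration samples let the converter estimate activation ranges,
    # which determine the fixed-point scaling factors.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer (fixed-point) kernels, inputs, and outputs.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("detector_int8.tflite", "wb") as f:
    f.write(converter.convert())
```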
The incident highlights several problems with today's AI for autonomous driving: cameras may fail to detect vehicles in the lane, fog and glare can interfere with recognizing white cars, radar may ignore stationary vehicles, and the way camera and radar data are fused still needs improvement. NYCU iVS Lab is now working to solve these problems.
Jiun-In Guo then turned to core embedded AI perception technologies and their applications. He pointed out that standard embedded deep learning development starts with preparing and labeling data, followed by building and training the model. NYCU iVS Lab has released platforms for each of these stages, giving AI developers fast and easy-to-use tools at every step and helping companies shorten development time.
Jiun-In Guo said the tools released by NYCU iVS Lab have all been tested and are highly practical. Taking data preparation and labeling as an example, the lab's ezLabel tool needs only two frames, one earlier and one later, to label an object throughout an entire video, greatly reducing manual annotation time. ezLabel is an open web platform available to deep learning experts and general users worldwide, and ezLabel 2.3 has accumulated more than 610 users.
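ezLabel's internal algorithm is not described in the presentation. The following minimal sketch only illustrates the general idea of propagating an annotation from two keyframes by linear interpolation of bounding boxes; the Box structure and frame indices are illustrative assumptions, not ezLabel code.

```python
# Minimal sketch of the idea behind two-keyframe labeling: given a bounding box
# drawn on an early frame and one on a later frame, intermediate frames get an
# interpolated box. This is an illustrative assumption, not ezLabel's actual code.
from dataclasses import dataclass

@dataclass
class Box:
    x: float  # top-left x
    y: float  # top-left y
    w: float  # width
    h: float  # height

def interpolate_boxes(first: Box, last: Box, first_frame: int, last_frame: int):
    """Linearly interpolate a box for every frame between two labeled keyframes."""
    span = last_frame - first_frame
    boxes = {}
    for frame in range(first_frame, last_frame + 1):
        t = (frame - first_frame) / span
        boxes[frame] = Box(
            x=first.x + t * (last.x - first.x),
            y=first.y + t * (last.y - first.y),
            w=first.w + t * (last.w - first.w),
            h=first.h + t * (last.h - first.h),
        )
    return boxes

# Example: an object labeled at frame 0 and frame 30 yields 31 boxes.
track = interpolate_boxes(Box(100, 80, 60, 40), Box(220, 90, 70, 45), 0, 30)
print(track[15])  # box roughly halfway between the two annotations
```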
For model construction, NYCU iVS Lab has built a lightweight SSD model and MTSAN (Multi-Task Semantic Attention Network). The lightweight SSD model addresses a long-standing pain point of such detectors: insufficient anchor density made it hard to detect thin, elongated objects. After incorporating CSPNet, the model's speed and accuracy improved while its computation and parameter counts were cut in half. MTSAN, meanwhile, combines object detection with pixel-level segmentation of the scene to strengthen object features; Jiun-In Guo noted that this step alone raises accuracy (mAP) by 4.5%.
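The exact anchor configuration of the lab's lightweight SSD is not disclosed. The sketch below only illustrates how extending the anchor aspect-ratio set increases coverage of thin, elongated objects; the grid size, scale, and ratio values are assumed for illustration.

```python
# Minimal sketch of SSD-style anchor generation. The aspect-ratio set and grid
# size are illustrative assumptions; they show how denser anchors with extreme
# ratios help cover thin, elongated objects that a sparse anchor set misses.
import itertools
import math

def generate_anchors(grid_size, scale, aspect_ratios):
    """Return (cx, cy, w, h) anchors, normalized to [0, 1], for one feature map."""
    anchors = []
    for row, col in itertools.product(range(grid_size), repeat=2):
        cx = (col + 0.5) / grid_size
        cy = (row + 0.5) / grid_size
        for ratio in aspect_ratios:
            w = scale * math.sqrt(ratio)
            h = scale / math.sqrt(ratio)
            anchors.append((cx, cy, w, h))
    return anchors

# A square-biased set vs. one extended with extreme ratios (poles, lane markers,
# or an overturned truck appear very wide or very tall in the image).
baseline = generate_anchors(grid_size=19, scale=0.2, aspect_ratios=[0.5, 1.0, 2.0])
extended = generate_anchors(grid_size=19, scale=0.2, aspect_ratios=[0.2, 0.5, 1.0, 2.0, 5.0])
print(len(baseline), len(extended))  # 1083 vs. 1805 anchors per feature map
```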
Once MTSAN's scene segmentation is integrated into a self-driving car, it can work with forward collision warning (FCWS) or lane departure warning (LDWS) systems to determine the lane precisely, even recognizing curved lane lines on mountain roads. 2D and 3D convolution-based behavior analysis can also be added to predict the direction and likelihood of overtaking maneuvers by vehicles approaching from behind.
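The talk does not detail the 2D/3D convolution network used for behavior analysis. The following PyTorch sketch shows one generic way a 3D-convolution block can classify a short rear-camera clip into overtaking directions; the layer sizes, clip length, and two-class output are illustrative assumptions.

```python
# Minimal sketch of a 3D-convolution block for behavior analysis over a short
# clip, as mentioned for predicting rear-vehicle overtaking. Layer sizes and the
# two-class output (overtake left / overtake right) are illustrative assumptions.
import torch
import torch.nn as nn

class OvertakePredictor(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            # Input: (batch, channels, frames, height, width)
            nn.Conv3d(3, 16, kernel_size=(3, 3, 3), padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # pool spatially, keep time
            nn.Conv3d(16, 32, kernel_size=(3, 3, 3), padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),               # collapse time and space
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        x = self.features(clip).flatten(1)
        return self.classifier(x)  # logits for overtaking direction

# A 16-frame rear-camera clip at 112x112 resolution.
model = OvertakePredictor()
logits = model(torch.randn(1, 3, 16, 112, 112))
print(logits.shape)  # torch.Size([1, 2])
```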
Jiun-In Guo closed by citing the United States' roadmap for AI development over the next 20 years. Future AI, he said, must be integrated with context and built on open knowledge platforms that pool collective effort, so that AI can understand human intelligence and reactions and interact with people meaningfully. AI must also be capable of self-learning, integrating all kinds of information from its surroundings to develop the ability to handle difficult challenges.
As for AI in autonomous driving, he noted that strengthening the development of perception technologies, so that vehicles can accurately identify every type of object on the road and the intent behind its movement, will be the focus of industry, academia, and research institutes. Such work can greatly reduce the probability of traffic accidents and build a safe and reliable transportation environment.