The robot Sophia, shown at the International Telecommunication Union's AI for Good summit held in Geneva in May 2018; she has built-in artificial intelligence and can chat with humans using language.

Artificial intelligence (AI; Cantonese Jyutping: jan4 gung1 zi3 nang4), also called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In science, the term "artificial intelligence" is also used for the subfield of computer science that studies artificial intelligence. The theoretical foundation of the field rests on the concept of the intelligent agent: an object counts as an intelligent agent if it can perceive its surrounding environment and use the information it obtains to raise its chances of achieving its goals; animals, humans included, fit this definition[1]. The goal of the AI field is to study how to build intelligent agents artificially; the most common approach is to draw on research from psychology and neuroscience, the fields that study natural intelligence, and then use computer programs to imitate the intelligence shown by humans and other animals[2].

The field of artificial intelligence was born in 1956. A group of engineers at the time declared that "(as far as the field is concerned) human intelligence can be described so precisely that a machine can be made to simulate it"[3], sparking debate in the scientific community, and the field has kept evolving ever since: since 1956 it has gone through several waves of enthusiasm[4][5], as well as periods when funding dried up because of research setbacks (the so-called AI winters)[6][7][8], and several distinct waves of technical innovation. By the twenty-first century AI had become a thriving field, divided into many subfields according to the techniques used or the goals pursued[9], and these subfields have often become so specialised that they can barely talk to one another[10].

Problems traditionally studied in AI research include teaching machines to carry out tasks such as reasoning, knowledge representation, planning, learning, natural language processing, perception, and moving and manipulating objects[9], while strong AI (generalized intelligence: an AI able to display everything a normal human's intelligence can) is one of the field's ultimate goals[11]. To reach these goals, AI researchers use methods including statistical methods, computational intelligence and traditional symbolic AI.

On the other hand, the claim that "a machine can be made to simulate human intelligence" went on to spark a series of philosophical debates (still running today) about the nature of the mind and about whether it is morally acceptable to artificially create something with human-level intelligence[12]; some also feel that AI, if left uncontrolled, would pose a threat to humanity[13], for instance by taking people's jobs and causing mass unemployment[14].

In the twenty-first century, breakthroughs in computing power, data volume and theory have brought a major revival of AI techniques, and AI has become an indispensable part of many new technologies, helping to solve numerous problems in computer science, software engineering and operations research[15].

Basic concepts

A game of Go between AlphaGo and the professional Go player Fan Hui, which AlphaGo won[16]; AlphaGo is an AI program that plays Go: it receives an input (the state of the board), runs many algorithms over it, and produces an output (the next move to play).

Algorithms

See also: algorithm

The foundation of artificial intelligence is the algorithm[17]: an algorithm is an unambiguous sequence of instructions that tells a machine how to carry out some task. An AI is built out of a large number of simpler algorithms. As a simple example, the following list of instructions, which teaches a computer to play tic-tac-toe, is an algorithm[18] (a minimal code sketch follows the list):

  1. If someone has a "threat" (two squares of one line occupied), take the remaining square of that line; otherwise,
  2. If there is a move that "forks", creating two "threats" (for our side) at once, play it; otherwise,
  3. If the centre square is free, take it; otherwise,
  4. If the opponent has taken a corner, take the opposite corner; otherwise,
  5. If possible, take an empty corner; otherwise,
  6. Take any free square.
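A minimal Python sketch of this priority list (the board encoding, the helper name threat_square, and the omission of the "fork" rule 2 are illustrative assumptions, not part of the source):

# Board: a list of 9 cells, each "X", "O" or None; rule 2 (forks) is omitted for brevity.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def threat_square(board, player):
    # Rule 1 helper: if `player` holds two cells of a line and the third is free,
    # return that third cell's index.
    for a, b, c in LINES:
        cells = [board[a], board[b], board[c]]
        if cells.count(player) == 2 and cells.count(None) == 1:
            return (a, b, c)[cells.index(None)]
    return None

def choose_move(board, me, opponent):
    move = threat_square(board, me)        # rule 1: complete my own threat ...
    if move is None:
        move = threat_square(board, opponent)   # ... or block the opponent's
    if move is not None:
        return move
    if board[4] is None:                   # rule 3: take the centre
        return 4
    for corner, opposite in [(0, 8), (2, 6), (8, 0), (6, 2)]:
        if board[corner] == opponent and board[opposite] is None:
            return opposite                # rule 4: opposite corner
    for corner in (0, 2, 6, 8):            # rule 5: any empty corner
        if board[corner] is None:
            return corner
    return board.index(None)               # rule 6: any free square

print(choose_move(["X", None, None, None, "O", None, None, None, "X"], "O", "X"))  # -> 2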

As another example, if a designer wants to build an AI that teaches a computer to drive, they will need one algorithm telling the computer what to do when the car is going too fast, another telling it what to do when the road ahead is a dead end, and so on.

Computational methods

For an AI program to display intelligence, it has to take some input, compute on it with its internal algorithms, and return an output, and that output determines how well it performs. For example, while an AI program plays chess, it continually receives information such as the current position on the board (input), does some computation, and decides which move to make (output), and the moves it plays ultimately decide whether it wins or loses (how well it performs).

On the question of how an AI program computes an output from its input, the field has been evolving from the very start[19][20]:

  • The earliest (and perhaps easiest to understand) approach uses symbols, such as formal logic, to make decisions. For example, if a designer wants to teach a computer to help diagnose patients (the input being information about the patient and the output a diagnosis), they must teach it a number of rules such as "if a generally healthy adult has a fever, they may have the flu", "if a generally healthy adult has a fever, they may instead have pneumonia", and so on. This approach is rather primitive, and in many situations the number of possibilities is so large that the designer cannot spell out every case for the machine (see the discussion of complexity below).
  • The second approach uses Bayesian inference, having the machine compute the probability of the different possibilities from past knowledge and the information at hand. Taking the diagnosis example again, the designer can give the computer a large body of data on past medical consultations and have it judge from the information about a new patient, so the computer's reasoning runs like this: "the patient in front of me has a fever but no rash or numbness; according to past data, the probability that a patient with fever but no rash or numbness has the flu is such-and-such, and the probability of pneumonia is such-and-such...", after which it can either seek more information or settle on a diagnosis on the spot.
  • The third approach is the so-called analogizer, popular in commercial AI, which tries to find the cases in past data most similar to the present one and uses those cases to make a judgement. Continuing the diagnosis example, the designer again gives the computer a large set of past consultation data and has it examine the new patient; the computer digs out the cases in its database most similar to the new patient and judges from them: "the patient in front of me has such-and-such a temperature, such-and-such an age and these symptoms; I have retrieved the few most similar cases in my database, and according to them the probability of flu is such-and-such and the probability of pneumonia is such-and-such..." (see the nearest-neighbour sketch after this list).
  • The fourth approach is the artificial neural network[20], which imitates the workings of an animal brain in software: it uses a large number of interconnected artificial neurons that adjust themselves according to the relationship between past inputs and outputs, so that eventually some particular relationship forms between the input and output of the whole program. If all goes well, the program ends up giving the correct output for each input it is given.
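To make the third, case-based approach concrete, here is a minimal nearest-neighbour sketch (the feature encoding, the made-up case base and every number in it are illustrative assumptions):

# Toy "analogizer": diagnose a new patient from the most similar past cases.
# Each case: ((temperature, age, has_cough), diagnosis) -- all values made up.
CASES = [((39.5, 30, 1), "flu"), ((40.1, 72, 1), "pneumonia"),
         ((38.8, 25, 0), "flu"), ((39.9, 80, 1), "pneumonia")]

def diagnose(patient, k=3):
    # Rank past cases by (squared) distance to the new patient ...
    ranked = sorted(CASES, key=lambda case:
                    sum((a - b) ** 2 for a, b in zip(case[0], patient)))
    # ... and take the majority diagnosis among the k nearest ones.
    nearest = [diag for _, diag in ranked[:k]]
    return max(set(nearest), key=nearest.count)

print(diagnose((40.0, 75, 1)))  # -> "pneumonia"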

Some systems use more than one of these methods[20][21], and in practice which approach works best usually depends on the situation.

Goals and learning

See also: goal and learning

A typical AI perceives its environment, computes the most reasonable action given its own goals, and takes whichever action maximises its chance of achieving them[1]. An AI's goal (the state it will not rest until it reaches) can be expressed as a function (put simply, numbers that tell the AI when it has done right and when it has done wrong). Such functions can be very simple ("if the AI wins the chess game, output 1, otherwise output 0") or very complex ("take actions that are mathematically similar to the actions that earned you rewards in the past").

Goals can be defined explicitly by the designer, or the machine can be left to induce them on its own, and there are many examples of the latter. With reinforcement learning, the AI does not initially know what the designer wants, but the designer sends it a signal whenever its output is right, so it gradually learns what is wanted[22] (a minimal sketch follows). There are also so-called evolutionary systems, which apply the principle of evolution: they start by making a large batch of copies of an AI program, similar but each slightly different, then weed out the worse performers and copy the better ones in a process resembling natural selection; each new generation resembles the previous generation's successes more than its failures, so the programs in the designer's hands perform better and better, without the designer ever having to state the goal explicitly[23][24].
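A minimal sketch of the reinforcement-learning idea, in which the agent learns from nothing but a reward signal (the two-action setup, learning rate and reward scheme are illustrative assumptions):

import random

# The agent is never told the goal; it only receives a reward (1 = "right",
# 0 = "wrong") after acting, and slowly learns which action the designer wants.
values = {"left": 0.0, "right": 0.0}   # the agent's running estimate per action

def reward(action):                     # the designer's hidden goal: go "right"
    return 1.0 if action == "right" else 0.0

for step in range(500):
    # Mostly pick the best-looking action, but sometimes explore at random.
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    # Nudge the estimate for the chosen action towards the observed reward.
    values[action] += 0.1 * (reward(action) - values[action])

print(values)   # the estimate for "right" ends up near 1.0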

Occam's razor

An illustration of overfitting: when an AI learns, it looks for a line that fits the past data (the black dots) reasonably well and uses that line as its model. The blue line overfits: it matches the past data perfectly, but its equation is far more complex than the straight line's, and it will usually be worse at explaining future data.
See also: Occam's razor

When learning, an AI usually follows the principle of Occam's razor: other things being equal, a learner should prefer simpler hypotheses, unless the more complex model is, say, dramatically better at explaining and predicting reality. When an AI (usually because it is badly designed) picks an overly complex model during learning just to make the model it believes in fit the past data perfectly, the phenomenon is called overfitting: although such complex models explain past data better, statistical research shows they are usually worse at explaining future data. To guard against overfitting, AI designers often encourage their programs to learn models that explain the data adequately without being too complex[25] (see the sketch below).
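This trade-off is easy to demonstrate; a minimal NumPy sketch, assuming some noisy, roughly linear toy data:

import numpy as np

# Fit the same noisy, roughly linear data with a straight line and with a
# degree-9 polynomial, then compare errors on fresh data from the same process.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.1, 10)   # past data (the black dots)
x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test + rng.normal(0, 0.1, 100)    # future data

for degree in (1, 9):
    coefs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    print(degree, round(train_err, 4), round(test_err, 4))

# The degree-9 fit matches the past data almost perfectly (near-zero training
# error) but typically generalises worse than the straight line on future data.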

Learning the wrong thing

Beyond that, a learning AI can also have the problem of learning the wrong thing. For example, imagine a designer wants to train an AI program to tell pictures of horses and cats apart: they gather a large pile of pictures of brown horses and black cats and feed them to the program for training. In this situation the program may learn the wrong lesson, concluding that brown objects are "horses" and black objects are "cats"[26]. How to stop AI from learning the wrong thing is a prominent topic in pattern recognition, the AI subfield that studies how to teach machines to distinguish images[27][28][29].

Complexity

See also: complexity

As mentioned above, many AIs can learn from data, picking up new heuristics (shortcuts for solving hard problems) from experience, and can even write new algorithms themselves.

In the study of AI learning, how to cope with complexity is an important topic. In theory, certain learning AIs, given unlimited data, time and memory, could approximate any function perfectly, including whatever function would accurately describe the entire real world; in everyday terms, given enough data, time and memory, these programs could learn anything. Such a program could in principle consider every possible hypothesis, check each one against the data, and eventually derive all the knowledge in the universe. In practice, however, because of combinatorial explosion (as a problem grows more complex, the number of possibilities grows explosively), "considering every possible case" is usually impossible in real applications. Take teaching an AI program to play chess: after each player has made one move there are 400 possible board positions, after two moves each there are 197,742, and after three moves each the number exceeds one million. In AlphaGo's case, AlphaGo does not make decisions by considering every possible situation: Go has around 10^170 possible positions, and no computer, however powerful, could work through them all in the time allowed.

Because of combinatorial explosion, much AI research focuses on how to make a program learn as much as possible within limited data and time; one direction is to work out how to pick, out of "all possible cases", a small subset of the most promising ones to consider[30][31]. For example, imagine a self-driving car with a built-in AI program whose owner asks it for the shortest driving route from Hong Kong to Guangzhou: in the vast majority of situations the program can safely skip over, say, the routes that go from Hong Kong to Guangzhou via Harbin (since those have essentially no chance of being the shortest), so it wastes no time or effort on them and only needs to consider a small subset of the possible routes[32] (the heuristic-search sketch below makes this concrete).
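One standard way to do this pruning is best-first search guided by a heuristic, as in the A* algorithm of reference [32]; a minimal sketch, in which the toy road graph, the distances and the straight-line-style heuristic are all made-up assumptions:

import heapq

# Minimal A* sketch: an optimistic distance-to-goal heuristic steers the search
# away from hopeless detours (e.g. via a far-northern city). Numbers are made up.
GRAPH = {"HK": {"SZ": 30, "Harbin": 2900}, "SZ": {"GZ": 110},
         "Harbin": {"GZ": 2900}, "GZ": {}}
HEURISTIC = {"HK": 130, "SZ": 100, "Harbin": 2800, "GZ": 0}

def astar(start, goal):
    frontier = [(HEURISTIC[start], 0, start, [start])]   # (estimate, cost, node, path)
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step in GRAPH[node].items():
            heapq.heappush(frontier,
                           (cost + step + HEURISTIC[nxt], cost + step, nxt, path + [nxt]))
    return None

print(astar("HK", "GZ"))   # -> (140, ['HK', 'SZ', 'GZ']); "Harbin" is never expanded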

Main topics

Sudoku is a classic puzzle; a sudoku problem can be solved by step-by-step reasoning, but in real life humans rarely solve problems that way.

The aim of AI research is to make computers and other machines behave as if they were intelligent. The biggest problem, simulating natural intelligence, can be broken into a large number of subproblems: abilities such as reasoning and problem solving are necessary components of any human-like AI, so the field has to solve these problems one by one, and indeed AI's subfields are often divided according to the problem they try to solve. The following are some of the problems that receive the most attention in the field.

Reasoning and problem solving

See also: reasoning and problem solving

One of the most basic topics in AI is how to teach machines to solve problems (problem solving). Early research concentrated on algorithms imitating the step-by-step reasoning humans use when solving puzzles[33], typically involving long chains of "if... then do..." instructions. But in the late 1980s and the 1990s, AI research began to draw on concepts from probability theory and economics and developed methods for handling uncertainty, and the older approach fell out of favour: because of combinatorial explosion, problems in real applications easily have many thousands of possible cases, and a computer cannot be taught purely by spelling out what to do in every single one[30][34][35]. Indeed, research in psychology and cognitive science shows that the human brain rarely solves problems by step-by-step reasoning at all[36][37].

For example, the puzzle game sudoku can be solved with relatively simple, deterministic "if... holds, then do..." rules, and other puzzle games can be solved the same way. But humans hardly use that kind of reasoning when solving problems in daily life: a problem like "what present should I buy to please my girlfriend" involves, for example, guessing what she wants, and is very hard to solve with simple "if... holds, then do..." logic.

Knowledge representation

Main articles: knowledge representation and knowledge engineering

Knowledge representation is an important part of classical AI research; it studies how to make an AI program use the knowledge it holds and apply it to complex tasks[38][39]. For example, suppose a designer wants to write an AI program to serve as a teaching assistant in geography: they want it to answer simple geography questions, such as knowing which sovereign states existed in North America in 2018, so when writing the program they can use the following piece of code:

NorthAmericanCountries = ("The United States", "Canada", "Mexico")

This line of source code tells the program that the category "North American countries" contains the three items "The United States", "Canada" and "Mexico". When a student asks the program which countries North America had in 2018, the AI program searches through the information it holds, finds the contents of the "North American countries" category, and gives the three names as its output (a minimal lookup sketch follows). In the same way, this technique can be used to teach any AI program to classify things, such as animals or languages. Besides this classification-style technique, knowledge representation research has many other methods for getting an AI program to organise the knowledge at hand[40].
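A minimal sketch of how the program might answer the student (the dictionary-of-categories layout and the function name are illustrative assumptions):

# Store each category with its members, then look categories up on demand.
KNOWLEDGE = {
    "NorthAmericanCountries": ("The United States", "Canada", "Mexico"),
    "Animals": ("Dog", "Cat", "Whale"),
}

def answer(category):
    # Search the stored knowledge for the category and return its members.
    members = KNOWLEDGE.get(category)
    if members is None:
        return "I don't know."
    return ", ".join(members)

print(answer("NorthAmericanCountries"))  # -> The United States, Canada, Mexico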

This hierarchical way of organising knowledge has quite a few limitations[41]. First, it assumes transitivity between categories, i.e. it assumes that "if (taxonomically) A belongs to B and B belongs to C, then A belongs to C"; but research shows that human cognition does not work like that: a "chair" belongs to the category "furniture", and "a somewhat battered chair" belongs to "chair", yet in practice it is hard to get anyone answering "what does the category furniture contain?" to say "a somewhat battered chair". So in at least one respect, purely hierarchical knowledge organisation is unlike human intelligence. Moreover, for an AI program to do everything human intelligence can, classification alone is not enough: besides sorting objects into categories, humans can also describe objects' properties ("the United States is about 200 years old, and its flag is red, blue and white...") and the relationships between objects ("the United States is friendly with Canada but not with Russia...")[42][43][44][45].

Ontologies

An ontology representing relationships between concepts (in English).
See also: semantic network

An approach currently very popular in the AI community is the ontology[46]: like the figure here, an ontology contains a large set of concepts (the figure includes mammal, whale, water and so on) and spells out the relationship between every pair of concepts: a whale "is a" mammal, a whale "lives in" water, a mammal "is an" animal, and so on. Ontology-based knowledge representation is useful in many practical domains, for instance supporting decisions in clinical medicine[47], knowledge discovery[48] and much else[49]. The community has accordingly developed the Web Ontology Language, a language designed specifically for building ontologies[50].

The following is an ontology expressed in the Web Ontology Language, describing some knowledge about pizzas[46]:

Namespace(p = <http://example.com/pizzas.owl#>)
Ontology( <http://example.com/pizzas.owl#>
   Class(p:Pizza partial
     restriction(p:hasBase someValuesFrom(p:PizzaBase)))
   DisjointClasses(p:Pizza p:PizzaBase)
   Class(p:NonVegetarianPizza complete
     intersectionOf(p:Pizza complementOf(p:VegetarianPizza)))
   ObjectProperty(p:isIngredientOf Transitive
     inverseOf(p:hasIngredient))
)

For example, the ObjectProperty line of this code declares properties of certain objects in the ontology, and it says several things. First, the relation "is an ingredient of" (isIngredientOf) is transitive (Transitive): it teaches the computer that if A is an ingredient of B and B is an ingredient of C, it may infer that A is an ingredient of C. The line also states that "is an ingredient of" is the inverse (inverseOf) of "has as an ingredient" (hasIngredient), so an AI program using this ontology can make inferences such as "a Hawaiian pizza has pineapple as an ingredient, therefore pineapple is an ingredient of Hawaiian pizza" (a Python sketch of both inferences follows).
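A minimal Python sketch of those two inferences (the pizza facts and relation names are illustrative; this is hand-rolled reasoning, not an OWL reasoner):

# Made-up facts in the hasIngredient relation.
has_ingredient = {("HawaiianPizza", "Topping"), ("Topping", "Pineapple")}

# Inverse property: hasIngredient(A, B) entails isIngredientOf(B, A).
is_ingredient_of = {(b, a) for a, b in has_ingredient}

# Transitive property: close the relation under "A of B and B of C entails A of C".
changed = True
while changed:
    new = {(a, c) for a, b1 in is_ingredient_of
                  for b2, c in is_ingredient_of if b1 == b2}
    changed = not new <= is_ingredient_of
    is_ingredient_of |= new

print(("Pineapple", "HawaiianPizza") in is_ingredient_of)   # -> True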

Tacit knowledge

See also: tacit knowledge

How to make AI display tacit knowledge is a topic that has drawn considerable attention in the early twenty-first century. Tacit knowledge is knowledge that is hard to put into words: a chess master, say, may intuitively feel that a certain move is "too dangerous" and avoid it, yet be unable to explain why it felt too dangerous when asked[51]. Research in cognitive psychology shows that humans can make judgements by intuition; in such cases the person cannot describe their thought process in words, yet experiments show that intuitive judgements are right far more often than pure guessing. That implies that while a human makes an intuitive judgement, the brain must be processing knowledge that cannot be put into words: this is what is meant by tacit knowledge[52][53]. Tacit knowledge matters enormously in everyday human life, because people cannot possibly make every decision (say, which foot to step out with when walking) by conscious deliberation. So to build a human-like AI, one must be able to give the AI behaviour that looks like tacit knowledge[53].

Learning

Main article: machine learning

Machine learning studies algorithms that let an AI program improve itself automatically with experience[34][54][55][56]. A program that can learn adjusts some of its internal parameters in the light of what it experiences, so that it performs better next time. A simple example: imagine a self-driving car whose AI program is set to apply the brakes only when it is 2 metres or less from the car in front; this value of "2 metres" is one of the program's internal parameters. Suppose that one day, when the self-driving car is 3 metres behind the car in front, the front car suddenly brakes and the self-driving car very nearly hits it; a program that can learn should then consider whether, given this experience, the parameter should become "4 metres" or "5 metres", so as to lower the chance of a future crash. Its code would contain something like the following:

float brake_distance; // the distance at or below which the car applies its brakes

if (distance_from_front_car <= brake_distance) {
    brake(); // too close to the car in front, so brake; assume the program can already sense this distance
}

// ... and some further algorithm must define what counts as a "near miss"; if a
// "near miss" occurs, brake_distance must be raised permanently, by some amount, and so on.

Machine learning divides into two broad kinds: supervised learning and unsupervised learning. In the former, the designer deliberately shows the program some data and explicitly tells it which answers count as right and which as wrong[56]; in the latter, the designer does not[57]. Teaching an AI program to classify things, for instance, can be done either way: with supervised learning, the designer typically gathers a large set of samples, labels each sample's category one by one, then shows the program the data and which parts to attend to when classifying, and if all goes well the program gradually learns to classify the samples it meets in the future; an example of unsupervised learning is cluster analysis[58]. The sketch after this paragraph contrasts the two settings.
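A minimal sketch contrasting the two settings on made-up one-dimensional data (the thresholding rule and the tiny 2-means loop are illustrative assumptions):

# Supervised: labelled examples (temperature, label) are given.
labelled = [(36.5, "healthy"), (36.8, "healthy"), (39.2, "fever"), (40.0, "fever")]
threshold = sum(t for t, _ in labelled) / len(labelled)   # crude learned boundary
classify = lambda t: "fever" if t > threshold else "healthy"
print(classify(39.5))   # -> "fever"

# Unsupervised: the same numbers with no labels; a tiny 2-means pass groups
# them into clusters without ever being told what the clusters mean.
data = [36.5, 36.8, 39.2, 40.0]
centres = [min(data), max(data)]
for _ in range(5):
    groups = [[x for x in data if abs(x - centres[0]) <= abs(x - centres[1])],
              [x for x in data if abs(x - centres[0]) > abs(x - centres[1])]]
    centres = [sum(g) / len(g) for g in groups if g]
print(groups)   # -> [[36.5, 36.8], [39.2, 40.0]]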

Planning

Main article: automated planning

Any reasonably intelligent agent must be able to set itself goals and try to achieve them[59]. To do so, an AI needs some capacity to imagine the future: to represent the state of its surroundings in some form (such as computer data) and to predict how its own actions and those of other agents will change that state; it must also be able to compute the utility of each possible state for itself (roughly, how much it helps it reach its goal) and make decisions according to the results[60][61].

Classical research used a highly idealised (not very realistic) model: the planning agent is imagined as if it were the only planning system in the world, so that it can predict the results of its actions perfectly[62]. Real agents are not like that: human or artificial, their plans are necessarily constrained by the actions of the other agents around them, so there is always uncertainty. This means that an AI that plans the way humans do must be able to handle uncertainty and to assess its own progress from the results of its actions[63].

There are many ways to teach an AI program to plan. With an artificial neural network, for instance, the designer can make the actions the network takes its input layer and the state of the environment its output, then have the network process a large body of past action-state pairs so that it learns to predict how its actions will change its surroundings, giving the AI program an internal model of "how the world works" (easy to say, much harder to do)[64].

Natural language processing

Linguistics has formal grammars for analysing the parts of speech of words, which can be used to help with NLP.
Main article: natural language processing

Natural language processing (NLP) is the field that studies how to teach AI to understand human language[65]. A sufficiently powerful NLP program would let humans communicate with machines simply by speaking or writing, with no need for programming languages, which are often far from easy to use; it could also extract useful information from sources written by humans, such as books and web pages[66], and carry out tasks such as machine translation[67].

NLP is very widely used. Spell checking is one example, found in Microsoft Word and the Google search engine among others. A spell checker examines a passage of text written in letters for misspellings[68]; one possible procedure is as follows[69] (a minimal sketch follows the list):

  1. Build a lexicon containing information such as which strings of letters are real words and how common each word is;
  2. When the document being checked contains a word that is not in the lexicon, display a red line under that word;
  3. If the user wants, pick a word from the lexicon to suggest in place of the red-lined word: choose the word in the lexicon closest to it, and if several words are equally close, choose the most common of them.
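A minimal sketch of steps 1 to 3 (the tiny lexicon is made up, and using Levenshtein edit distance as the measure of "closeness" is an assumption):

# Step 1: a lexicon mapping each known word to how common it is.
LEXICON = {"the": 1000, "cat": 40, "hat": 30, "car": 55}

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i-1][j] + 1, d[i][j-1] + 1,
                          d[i-1][j-1] + (a[i-1] != b[j-1]))
    return d[-1][-1]

def check(word):
    if word in LEXICON:
        return word                       # step 2: known word, no red line
    # Step 3: nearest known word; ties broken by frequency.
    return min(LEXICON, key=lambda w: (edit_distance(w, word), -LEXICON[w]))

print(check("caz"))   # -> "car": distance 1, and it beats "cat" on frequency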

Modern NLP programs often use several strategies together and can reach acceptable accuracy at the level of a page or a paragraph, but they still lack the ability to understand the content of what they read, still cannot classify individual sentences, and are often too costly in time to be deployed commercially[70].

Perception

Main articles: machine perception, computer vision and speech recognition

Machine perception aims to teach a machine to sense information about the outside world from sensors (cameras, microphones, radar and so on) and to understand its surroundings, rather than relying on the designer to tell it everything[71]. Biological agents, humans and other animals alike, all gather information about their surroundings through senses such as sight and hearing and then process and analyse it, so if AI is to match human-like intelligence, it must be able to do the same. In computer vision[72], for example, the designer can write a program that takes data from the computer's camera and contains a component trained beforehand to recognise, say, which parts of an image are human faces and which are not; the result is a program that can tell, through the computer's camera, whether there is a face in front of it. Machine perception can also be used for speech recognition[73], face recognition and object recognition[74].

An edge detection algorithm can find which parts of an image are the edges of objects.

Others

An industrial robot working metal in a factory.
  • Robotics: an AI program's output can be numbers expressing information such as the positions and orientations of objects and the angle each joint should bend to, and this information can be used to control robotic arms; indeed, robotic arms are a common sight in modern factories[75][76].
  • Affective computing: the numeric output an AI program gives can represent "facial expression" information (a smile, for instance, involves raising the corners of the mouth by some angle), so scientists study how to program computers to produce expressions and other social behaviour, that is, to give AI something like human emotion and the ability to socialise with humans[77][78][79][80].

Artificial general intelligence

Main articles: strong AI and AI-complete

Over its history the AI field has seen many projects aiming to build artificial general intelligence: AI that behaves like a human and would pass the Turing test perfectly. Every one of them has fizzled out, having far underestimated the difficulty of the task. By the twenty-first century, a typical AI specialist mostly concentrates on solving one or two problems rather than being so ambitious as to attempt an AI program that solves problems as generally as a human does[81][82]. Many AI specialists do believe, however, that these programs, each able to solve only one or two problems, will one day be assembled together into an artificial general intelligence[83][84][85].

Early twenty-first-century AI also has the problem of lacking common sense[86][87]. Compared with the AI of this period, humans are very good at judging physical and psychological situations without any training: even a very young child can reason that "if I roll this pen across the table's surface, it will end up falling onto the floor", and humans effortlessly understand a sentence like "the city councilmen refused the demonstrators a permit because they advocated violence", whereas an early twenty-first-century AI program often cannot tell whether it is the councilmen or the demonstrators who advocate violence[88]. This lack of common sense means AIs constantly make mistakes humans would never make, in ways humans find baffling: programs like AlphaGo can thrash the human world champions at Go in competition, yet cannot answer a question as trivially easy (for a human) as "how do you tell whether a glass of milk is full"[89].

History of twenty-first-century AGI research

Computational methods

AI can use many different computational methods to get from input to output[91]:

Cybernetics and brain simulation

See also: cybernetics and computational neuroscience

During the 1940s and 1950s, a number of researchers explored the connections between neurobiology, information theory and cybernetics. Some of them built machines that used electronic networks to simulate the structure of the brain and display intelligent behaviour[92]. By 1960, however, this approach had fallen out of favour, and early twenty-first-century AI research rarely uses it.

Symbolic

Main article: symbolic AI

When the AI field was just beginning in the 1950s, many scientists believed that human intelligence was nothing more than the manipulation of logical symbols[93], and the computational technique they used is what is called symbolic AI, also known as good old-fashioned AI ("GOFAI")[94]. This kind of AI processes the input it receives with a large set of logical symbols and produces its output from those computations[95][96][97]. The symbolic approach rests on three ideas[95]:

  • a model representing an intelligent system can be defined completely and explicitly;
  • the knowledge in such a model can be expressed with logical symbols;
  • cognitive tasks can be described as formal operations performed on those symbols.

The main varieties of symbolic AI are the following:

Cognitive simulation

Many twentieth-century scholars believed that human cognitive processes could be expressed with relatively simple heuristics[95]. They pictured a problem to be solved as a set of states, with problem solving being the process by which the AI tries to move from the initial state to its goal state, and they held that humans use relatively simple, though not necessarily correct, cognitive shortcuts to judge how to move from one state to the next (intuitive, unconscious assumptions such as "if I push this chess piece too far forward, it usually gets captured"); such simple, direct, usually serviceable rules are what is meant by heuristics[98]. Researchers taking this approach hold that to give AI human-like intelligence, one should use experiments from cognitive science and psychology to find out which heuristics humans use, develop algorithms that imitate the heuristics the human brain uses, and then use chains of these algorithms to simulate the intelligence humans display. This approach remained in wide use right up to the mid-1980s.

Logic-based

Other AI researchers argued that AI need not simulate the heuristics involved in human cognition, and should instead use strictly correct logical rules (as opposed to the "usually correct" heuristics) to specify what counts as the right answer[99][100]. For instance, to teach a computer how to tell whether two people are each other's siblings, an AI researcher taking the logic-based approach would teach the computer the following rule[95]:

siblings(X, Y) :- parent(Z, X), parent(Z, Y).

This rule says: "if there is a person Z who is a parent of X and also a parent of Y, then X and Y are siblings." This line of thought is unlike how humans think in everyday life, but it is an absolutely correct and logical one. By contrast, an AI researcher using cognitive simulation would investigate what (unconscious, not entirely reliable) rules ordinary people use to judge how likely two people are to be siblings, and then imitate those rules with algorithms.

Knowledge-based

From the 1970s onward, computer memory grew larger and larger, and AI specialists of every stripe began trying to add a "knowledge" component to AI applications[101][102][103]. This "knowledge revolution" drove the development of expert systems, the first genuinely successful kind of AI software. An expert system is built on a knowledge base and reasons with a large set of "if... then..." rules: a diagnostic expert system, for example, holds a mass of information about different diseases and their symptoms, and when shown a patient it tries to reach a diagnosis using rules such as "if the patient has a cough, it could be..." and "if the patient has a cough but it is not pneumonia, it could instead be...". But building an expert system requires a real human expert's help, and stating every single rule explicitly to the AI program is very time-consuming, while experts, being in demand for their services, are usually too busy to be recruited to help build expert systems.

Problems

Symbolic AI's biggest problem is that it requires the designer to state explicitly every rule the AI needs for its judgements, and from the 1980s onward the field came to see that in many situations this approach simply does not work[94]. First, such systems are cumbersome learners: to teach the program one new rule, the designer has to trawl by hand through the code (easily tens of thousands of lines) for every existing rule that conflicts with the new one and remove those old rules, which is horribly time-consuming[104]. Moreover, symbolic AI simply cannot model natural intelligence: in reality, human intelligence very often relies on intuition and other unconscious rules that are hard to state explicitly. So twenty-first-century AI rarely relies on purely symbolic methods; newer AIs tend to compute with at least hybrid methods, combining symbolic and non-symbolic AI[105][106].

Artificial neural networks

A typical artificial neural network has three main layers: the input layer receives signals from the outside, the hidden layer computes on the signals received from the input layer, and the values given by the output layer form the output.
Main article: artificial neural network

Artificial neural networks are modelled on the structure of the neurons in the human brain[107][108][109]: an animal's brain is made up of a great many neurons (a human brain has roughly 86 billion), each of which receives electrical and chemical signals from other neurons, and when a neuron is stimulated strongly enough by those signals it may fire an electrical and chemical signal of its own, which may in turn stimulate other neurons, so that signals spread through the network. Artificial neural networks exploit the same principle. To build one, the designer sets up a large number of artificial neurons, each with a variable representing its activation level, and some representation (usually a matrix) of the connections between the neurons. For example, each artificial neuron's activation level can be given by a formula of the form:

$a = f\left(\sum_i w_i x_i\right)$

Here $a$ is the neuron's activation level, $x_i$ is the activation level of the $i$-th neuron in the previous layer, and $w_i$ is the weight of that $i$-th neuron (how strongly it influences this neuron's activation). If the designer feeds an input into the neurons of the network's input layer, the neurons of the subsequent layers are stimulated by the input-layer neurons and change their activation levels, and the last layer of neurons (the output layer) represents the output. From there, one possible approach is to "train" the network with the backpropagation algorithm[110][111]: the designer compares the output the network actually gives with the output they want it to give, computes the gap, and uses that gap to work out how the weights between the neurons should be adjusted. Step by step, the network becomes better and better at giving the correct answer[112][113][114].
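A minimal NumPy sketch of this forward-pass-plus-backpropagation loop (the toy XOR task, layer sizes, learning rate and iteration count are illustrative assumptions):

import numpy as np

# A one-hidden-layer network learns XOR by backpropagation.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1)                 # hidden activations: a = f(sum_i w_i x_i)
    out = sigmoid(h @ W2)               # output layer
    err = out - y                       # gap between actual and desired output
    # Backpropagation: push the error back through the weights.
    grad_W2 = h.T @ (err * out * (1 - out))
    grad_W1 = X.T @ (((err * out * (1 - out)) @ W2.T) * h * (1 - h))
    W1 -= 0.5 * grad_W1
    W2 -= 0.5 * grad_W2

print(out.round(2).ravel())             # approaches [0, 1, 1, 0]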

Statistical methods

See also: hidden Markov model, Kalman filter and particle filter

Many of the problems AI has to solve involve uncertainty: the program constantly has to act without knowing all the information it needs. Drawing on probability theory and economics, AI specialists have devised a set of tools for solving such problems[115].

Bayesian networks

A simple Bayesian network.
Main article: Bayesian network

A Bayesian network[116][117] is a tool that can be used to teach computers to reason[118], learn[119] and plan[120]. A Bayesian network considers a large number of variables and models the relationships between them with a set of equations based on Bayes' theorem. As a simple example, imagine a Bayesian network that looks at the values of certain variables (including "whether it is raining" and "whether the garden sprinkler is on") and computes the probability that the state "the grass is wet" is true, using an equation such as:

$P(G, S, R) = P(G \mid S, R) \, P(S \mid R) \, P(R)$

Here $G$ is the state "the grass is wet", $S$ is the state "the sprinkler is on", and $R$ is the state "it is raining". In everyday language the equation says: "the probability that all three states are true" ($P(G, S, R)$) equals "the probability that the grass is wet given that it is raining and the sprinkler is on" ($P(G \mid S, R)$) times "the probability that the sprinkler is on given that it is raining" ($P(S \mid R)$) times "the probability that it is raining" ($P(R)$).
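A minimal sketch of this sprinkler network (all the conditional probability numbers are made-up assumptions):

P_R = 0.2                                             # P(rain)
P_S_given_R = {True: 0.01, False: 0.4}                # P(sprinkler | rain)
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}   # P(wet | sprinkler, rain)

def joint(g, s, r):
    # P(G,S,R) = P(G|S,R) * P(S|R) * P(R), as in the formula above.
    p_r = P_R if r else 1 - P_R
    p_s = P_S_given_R[r] if s else 1 - P_S_given_R[r]
    p_g = P_G_given_SR[(s, r)] if g else 1 - P_G_given_SR[(s, r)]
    return p_g * p_s * p_r

print(joint(True, True, True))   # P(wet grass, sprinkler on, rain) = 0.00198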

Once the program is written, the designer can, for example, show it a large body of data on the relationships between these variables and have it work out from that past data how the variables relate, after which the program can be used to predict the future[116]. Bayesian AI has a wide range of uses: Xbox Live, for instance, uses a Bayesian network that considers players' win probabilities when finding matches for players of online games[121]. One approach is to build an artificial neural network whose inputs are statistics from each player's previous matches, such as how often they killed opponents and how often they were killed, and whose output is "the probability that team A wins if the teams are split this way", together with an algorithm that searches for the split that brings this probability closest to 50%, making the match as evenly balanced as possible[121].

Problems

Compared with symbolic AI, Bayesian approaches such as Bayesian networks have their own drawbacks: they have to compute large numbers of probabilities internally, which makes them far more computationally expensive (costing much more time and memory), so designers often have to simplify their models to save time and memory. For instance, a designer whose computer lacks sufficient computing power may be unable to use Bayesian networks involving loop structures, meaning that the possible complexity of the model is limited by computational constraints rather than by the model's ability to capture reality.

Neuroevolution

Main article: neuroevolution

Besides backpropagation, the early twenty-first century also has the so-called neuroevolution method of training artificial neural networks[122]. Designers using this approach apply the principle of natural selection from evolutionary theory: according to natural selection, every biological species in nature has individual variation, which makes some individuals in a population better at surviving and reproducing, so those individuals are more likely to pass their genes to the next generation. In the same way, neuroevolution starts by making a large batch of neural networks, similar to one another but each slightly different, then has each network attempt the task the designer cares about several times; the best-performing networks are copied by the designer to produce offspring networks (which resemble the good performers), while the poor performers are eliminated, so the networks in the designer's hands grow better and better[122][123] (see the sketch below).
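A minimal sketch of the select-mutate-copy loop (a one-number "genome" stands in for a network's weights; the fitness function and all constants are illustrative assumptions):

import random

# A population of candidate "weights" is copied with small mutations, and the
# better performers parent the next generation.
TARGET = 0.7                                   # unknown ideal weight value

def fitness(w):
    return -abs(w - TARGET)                    # closer to the target = fitter

population = [random.uniform(0, 1) for _ in range(20)]
for generation in range(50):
    # Keep the best half ...
    survivors = sorted(population, key=fitness, reverse=True)[:10]
    # ... and refill the population with mutated copies of the survivors.
    population = [w + random.gauss(0, 0.05) for w in survivors for _ in range(2)]

print(round(max(population, key=fitness), 2))  # drifts towards 0.7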

Practical applications

Main article: applications of artificial intelligence

AI is useful for any intellectual task[124]. Some of the better-known examples include vehicles that drive themselves (including cars and drones), medical diagnosis, proving mathematical theorems, making art, playing games, search engines, recognising the images in photos, and online advertising[125][126][127][128].

Self-driving cars

A Chrysler Pacifica self-driving car.
Main article: self-driving car

AI is an indispensable part of self-driving car technology. As of 2016, a total of 30 major companies were using AI in the development of self-driving cars[129][130]:

  • One of the most important parts of a self-driving car's AI is teaching the car to understand the layout of its surroundings. Generally, a self-driving car carries a built-in map of the area it will drive in, including information such as the locations of traffic lights and pavements; there is also research into letting self-driving cars learn what their surroundings look like from experience, without a built-in map[131].
  • In self-driving car design, how to guarantee passenger safety is a major issue. A self-driving car's program is bound to include algorithms telling the car how to handle dangerous situations, but questions like "when a crash is unavoidable and the car must either hit a pedestrian or injure its passenger, should it protect the pedestrian or the passenger?" remain very hard[132].

Medicine

AI has a wide range of medical uses:

  • AI programs can help assess drug dosages: in medicine, how much of a drug to give is a matter of life and death (with surgical anaesthesia, for instance, too large a dose of anaesthetic can kill), and in 2016 a study done in California found a mathematical formula, discovered thanks to AI, that can be used to determine what dose of immunosuppressant to give a patient[133]. Assessing dosages has always been a time- and effort-consuming procedure, so this kind of AI program saves the medical profession a great deal of money.
  • AI programs can help decide how to treat cancer[134]: there are more than 800 drugs and vaccines available for treating cancer, so deciding how to treat a cancer patient is no easy matter for a doctor. Microsoft has developed an AI program called "Hanover" that memorises all the research papers relevant to cancer, knows which treatment works best in which situation, and uses that information to decide what treatment a particular patient should get[135]. Research has found such programs to be as good as human doctors at diagnosing cancer[135].
  • Other research has tried using AI programs to monitor high-risk patients and assess their risk of illness, information that is useful for treatment and insurance alike[136].

Others

  • AI can even be used to make art[137]: each neuron in an artificial neural network's input layer can be set to one pixel of an input image[137], and each neuron in the output layer can likewise serve as a pixel of the output image, while the computations done by the hidden layers in between make the output image resemble the input image somewhat while still differing from it. Researchers have, for example, successfully built artificial neural networks that can turn any photo into an Impressionist-style work[137].
  • The twenty-first-century financial world uses AI programs to monitor the activity of buyers and sellers. These programs alert banks and other institutions when they spot unusual activity, making it much easier to detect fraud and other crimes[138].
  • Some AI programs can look at which websites a person visits and use that information to help target advertising: if the program detects that a user is particularly fond of searching for gaming-related information, for instance, it will show that user more gaming-related adverts[139].

In popular media

An actor dressed as Frankenstein's monster.

AI appeared in fiction as early as the beginning of the 19th century: the classic science-fiction novel Frankenstein by the English writer Mary Shelley involves a human scientist creating a flesh-and-blood, intelligent creature (in the broad sense a kind of artificial intelligence) which has a will of its own, hates its creator and threatens him. From then on, Westerners have worried that artificially made intelligent beings might threaten humanity, and the trope has recurred in later works[140]: the late twentieth century's Terminator series depicts "Skynet", an artificial neural network program controlling the world's computer systems, trying to wipe out humanity so that humans cannot switch it off; The Matrix tells of a war between machines and humans, with the machines also imprisoning human minds in a virtual world in order to use human bodies as an energy source; and in the early twenty-first-century film Ex Machina, one of the main characters is an AI robot who deceives and murders humans to avoid being kept locked up by them. By contrast, other works portray AIs that are loyal and friendly to humans[141], a theme especially common in works from the East: in Japan's Doraemon and Astro Boy, the protagonists are AI robots friendly to humanity[142][143].

On the other hand, some literary and screen works use the comparison between artificial and human intelligence to make readers and viewers reflect on what counts as "human". These works often depict AIs capable of feeling (physical and psychological) pain and portray their struggles to provoke debate over whether such AIs deserve to be treated by the world as "people". For example, in A.I.: Artificial Intelligence, directed by the American director Steven Spielberg[144], the protagonist is David, a robot boy created by humans to give childless couples a child to raise, who goes from being rejected by his adoptive mother to being accepted, and later ventures out in the hope of becoming a real flesh-and-blood human boy[144]. In one scene, David is captured by a mob of robot-hating humans, and as they prepare to execute him in public, the onlookers, seeing his lifelike appearance and behaviour, take him for a real human child and object to killing him: a scene that led many viewers to ponder whether David, metal on the inside but capable of emotion, counts as a "person".

The Three Laws of Robotics

Main article: Three Laws of Robotics

The American science-fiction writer Isaac Asimov proposed the idea of the Three Laws of Robotics in several of his works[145]. In his view, AI robots must obey three laws: first, a robot may not injure a human being or, through inaction, allow a human being to come to harm; second, a robot must obey the orders given to it by humans, unless an order would violate the first law; third, a robot must protect its own existence, unless doing so would violate the first or second law[145]. These three laws ensure that robots are obedient and willing to sacrifice themselves in the service of humans, while preventing certain humans from using them to harm others.

The Three Laws of Robotics are a common touchstone in Western public discussion of robot ethics, so common that they have come to be treated as a stock trope[146], but professional AI researchers rarely take them seriously, owing to problems such as the laws' ambiguity[147].

See also

References

Textbooks

  • Hutter, Marcus (2005). Universal Artificial Intelligence. Berlin: Springer.
  • Jackson, Philip (1985). Introduction to Artificial Intelligence (2nd ed.). Dover.
  • Luger, George; Stubblefield, William (2004). Artificial Intelligence: Structures and Strategies for Complex Problem Solving (5th ed.). Benjamin/Cummings.
  • Neapolitan, Richard; Jiang, Xia (2018). Artificial Intelligence: With an Introduction to Machine Learning. Chapman & Hall/CRC.
  • Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann. ISBN 978-1-55860-467-4.
  • Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall.
  • Russell, Stuart J.; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, New Jersey: Prentice Hall.
  • Poole, David; Mackworth, Alan; Goebel, Randy (1998). Computational Intelligence: A Logical Approach. New York: Oxford University Press.
  • Winston, Patrick Henry (1984). Artificial Intelligence. Reading, MA: Addison-Wesley.
  • Rich, Elaine (1983). Artificial Intelligence. McGraw-Hill.
  • Bundy, Alan (1980). Artificial Intelligence: An Introductory Course (2nd ed.). Edinburgh University Press.
  • Poole, David; Mackworth, Alan (2017). Artificial Intelligence: Foundations of Computational Agents (2nd ed.). Cambridge University Press.

History

  • Crevier, Daniel (1993), AI: The Tumultuous Search for Artificial Intelligence, New York, NY: BasicBooks.
  • McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd..
  • Newquist, HP (1994). The Brain Makers: Genius, Ego, And Greed In The Quest For Machines That Think. New York: Macmillan/SAMS.
  • Nilsson, Nils (2009). The Quest for Artificial Intelligence: A History of Ideas and Achievements. New York: Cambridge University Press.

Others

  • DH Autor, "Why Are There Still So Many Jobs? The History and Future of Workplace Automation" (2015). 29(3), Journal of Economic Perspectives, 3.
  • TechCast Article Series, John Sagi, "Framing Consciousness".
  • Boden, Margaret, Mind As Machine, Oxford University Press, 2006
  • Domingos, Pedro, "Our Digital Doubles: AI will serve our species, not control it", Scientific American, vol. 319, no. 3 (September 2018), pp. 88–93.
  • Gopnik, Alison, "Making AI More Human: Artificial intelligence has staged a revival by starting to incorporate what we know about how children learn", Scientific American, vol. 316, no. 6 (June 2017), pp. 60–65.
  • Johnston, John (2008). The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI, MIT Press.
  • Marcus, Gary, "Am I Human?: Researchers need new ways to distinguish artificial intelligence from the natural kind", Scientific American, vol. 316, no. 3 (March 2017), pp. 58–63. Multiple tests of artificial-intelligence efficacy are needed because, "just as there is no single test of athletic prowess, there cannot be one ultimate test of intelligence." One such test, a "Construction Challenge", would test perception and physical action—"two important elements of intelligent behavior that were entirely absent from the original Turing test." Another proposal has been to give machines the same standardized tests of science and other disciplines that schoolchildren take. A so far insuperable stumbling block to artificial intelligence is an incapacity for reliable disambiguation. "[V]irtually every sentence [that people generate] is ambiguous, often in multiple ways." A prominent example is known as the "pronoun disambiguation problem": a machine has no way of determining to whom or what a pronoun in a sentence—such as "he", "she" or "it"—refers.
  • E McGaughey, "Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy" (2018). SSRN, part 2(3).
  • Myers, Courtney Boyd ed. (2009). "The AI Report". Forbes, June 2009.
  • Raphael, Bertram (1976). The Thinking Computer. W.H. Freeman and Company.
  • Serenko, Alexander (2010). "The development of an AI journal ranking based on the revealed preference approach" (PDF). Journal of Informetrics. 4 (4): 447–459.
  • Serenko, Alexander; Michael Dohan (2011). "Comparing the expert survey and citation impact journal ranking methods: Example from the field of Artificial Intelligence" (PDF). Journal of Informetrics. 5 (4): 629–649.
  • Sun, R. & Bookman, L. (eds.), Computational Architectures: Integrating Neural and Symbolic Processes. Kluwer Academic Publishers, Needham, MA. 1994.
  • Tom Simonite (29 December 2014). "2014 in Computing: Breakthroughs in Artificial Intelligence". MIT Technology Review.

  1. 1.0 1.1 Definition of AI as the study of intelligent agents:
    • Poole, Mackworth & Goebel 1998, p. 1, which provides the version that is used in this article. Note that they use the term "computational intelligence" as a synonym for artificial intelligence.
    • Russell & Norvig (2003). (who prefer the term "rational agent") and write "The whole-agent view is now widely accepted in the field" (Russell & Norvig 2003, p. 55).
    • Nilsson 1998.
    • Legg & Hutter 2007.
  2. Russell & Norvig 2009, p. 2.
  3. Solomonoff, R.J.The Time Scale of Artificial Intelligence; Reflections on Social Effects, Human Systems Management, Vol 5 1985, Pp 149-153.
  4. Optimism of early AI:
    • Herbert Simon quote: Simon 1965, p. 96 quoted in Crevier 1993, p. 109.
    • Marvin Minsky quote: Minsky 1967, p. 2 quoted in Crevier 1993, p. 109.
  5. Boom of the 1980s: rise of expert systems, Fifth Generation Project, Alvey, MCC, SCI:
    • McCorduck 2004, pp. 426–441.
    • Crevier 1993, pp. 161–162,197–203, 211, 240.
    • Russell & Norvig 2003, p. 24.
    • NRC 1999, pp. 210–211.
  6. First AI Winter, Mansfield Amendment, Lighthill report
    • Crevier 1993, pp. 115–117.
    • Russell & Norvig 2003, p. 22.
    • NRC 1999, pp. 212–213.
    • Howe 1994.
  7. Second AI winter:
    • McCorduck 2004, pp. 430–435.
    • Crevier 1993, pp. 209–210.
    • NRC 1999, pp. 214–216.
  8. AI becomes hugely successful in the early 21st century: Clark 2015.
  9. 9.0 9.1 This list of intelligent traits is based on the topics covered by the major AI textbooks, including:
    • Russell & Norvig 2003.
    • Luger & Stubblefield 2004.
    • Poole, Mackworth & Goebel 1998.
    • Nilsson 1998.
  10. Pamela McCorduck (2004, pp. 424) writes of "the rough shattering of AI in subfields—vision, natural language, decision theory, genetic algorithms, robotics ... and these with own sub-subfield—that would hardly have anything to say to each other."
  11. General intelligence (strong AI) is discussed in popular introductions to AI:
    • Kurzweil 1999 and Kurzweil 2005.
  12. This is a central idea of Pamela McCorduck's Machines Who Think. She writes: "I like to think of artificial intelligence as the scientific apotheosis of a venerable cultural tradition." (McCorduck 2004, p. 34) "Artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized." (McCorduck 2004, p. xviii) "Our history is full of attempts—nutty, eerie, comical, earnest, legendary and real—to make artificial intelligences, to reproduce what is the essential us—bypassing the ordinary means. Back and forth between myth and reality, our imaginations supplying what our workshops couldn't, we have engaged for a long time in this odd form of self-reproduction." (McCorduck 2004, p. 3) She traces the desire back to its Hellenistic roots and calls it the urge to "forge the Gods." (McCorduck 2004, pp. 340–400).
  13. "Stephen Hawking believes AI could be mankind's last accomplishment". BetaNews.
  14. Ford, Martin; Colvin, Geoff (6 September 2015). "Will robots create more jobs than they destroy?". The Guardian.
  15. AI applications widely used behind the scenes:
    • Russell & Norvig 2003, p. 28.
    • Kurzweil 2005, p. 265.
    • NRC 1999, pp. 216–222.
  16. "AlphaGo | DeepMind". DeepMind.
  17. "an algorithm is a procedure for computing a function (with respect to some chosen notation for integers) ... this limitation (to numerical functions) results in no loss of generality", (Rogers 1987:1).
  18. Domingos 2015, Chapter 1.
  19. Domingos 2015, Chapter 2, Chapter 4, Chapter 6.
  20. 20.0 20.1 20.2 "Can neural network computers learn from experience, and if so, could they ever become what we would call 'smart'?". Scientific American. 2018.
  21. "Algorithm in Artificial Intelligence [dead link]".
  22. A Beginner's Guide to Deep Reinforcement Learning. Archived at the Wayback Machine, 28 November 2018. A.I. Wiki.
  23. Evolutionary algorithms are the living, breathing AI of the future. VB.
  24. Domingos 2015, Chapter 5.
  25. Domingos 2015, Chapter 6, Chapter 7.
  26. Domingos 2015, p. 286.
  27. "Single pixel change fools AI programs". BBC News. 3 November 2017.
  28. "AI Has a Hallucination Problem That's Proving Tough to Fix". WIRED. 2018
  29. Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv:1412.6572 (2014).
  30. 30.0 30.1 Intractability and efficiency and the combinatorial explosion:
    • Russell & Norvig 2003, pp. 9, 21–22.
  31. Domingos 2015, Chapter 2, Chapter 3.
  32. Hart, P. E.; Nilsson, N. J.; Raphael, B. (1972). "Correction to "A Formal Basis for the Heuristic Determination of Minimum Cost Paths"". SIGART Newsletter, (37): 28–29.
  33. Problem solving, puzzle solving, game playing and deduction:
    • Russell & Norvig 2003, chpt. 3–9,
    • Poole, Mackworth & Goebel 1998, chpt. 2,3,7,9,
    • Luger & Stubblefield 2004, chpt. 3,4,6,8,
    • Nilsson 1998, chpt. 7–12.
  34. 34.0 34.1 Jordan, M. I.; Mitchell, T. M. (16 July 2015). "Machine learning: Trends, perspectives, and prospects". Science. 349 (6245): 255–260.
  35. Uncertain reasoning:
    • Russell & Norvig 2003, pp. 452–644,
    • Poole, Mackworth & Goebel 1998, pp. 345–395,
    • Luger & Stubblefield 2004, pp. 333–381,
    • Nilsson 1998, chpt. 19.
  36. Dane, E., Baer, M., Pratt, M. G., & Oldham, G. R. (2011). Rational versus intuitive problem solving: How thinking “off the beaten path” can stimulate creativity. Psychology of Aesthetics, Creativity, and the Arts, 5(1), 3.
  37. Sherin, B. (2006). Common sense clarified: The role of intuitive knowledge in physics problem solving. Journal of Research in Science Teaching: The Official Journal of the National Association for Research in Science Teaching, 43(6), 535-555.
  38. Knowledge representation:
    • ACM 1998, I.2.4,
    • Russell & Norvig 2003, pp. 320–363,
    • Poole, Mackworth & Goebel 1998, pp. 23–46, 69–81, 169–196, 235–277, 281–298, 319–345,
    • Luger & Stubblefield 2004, pp. 227–243,
    • Nilsson 1998, chpt. 18.
  39. Knowledge engineering:
    • Russell & Norvig 2003, pp. 260–266,
    • Poole, Mackworth & Goebel 1998, pp. 199–233,
    • Nilsson 1998, chpt. ≈17.1–17.4.
  40. Limitations of Knowledge Representation Models. Archived at the Wayback Machine, 13 August 2018.
  41. Kwasnik, B. H. (1999). The role of classification in knowledge representation and discovery.
  42. Representing categories and relations: Semantic networks, description logics, inheritance (including frames and scripts):
    • Russell & Norvig 2003, pp. 349–354,
    • Poole, Mackworth & Goebel 1998, pp. 174–177,
    • Luger & Stubblefield 2004, pp. 248–258,
    • Nilsson 1998, chpt. 18.3.
  43. Representing events and time:Situation calculus, event calculus, fluent calculus (including solving the frame problem):
    • Russell & Norvig 2003, pp. 328–341,
    • Poole, Mackworth & Goebel 1998, pp. 281–298,
    • Nilsson 1998, chpt. 18.2
  44. Causal calculus:
    • Poole, Mackworth & Goebel 1998, pp. 335–337.
  45. Representing knowledge about knowledge: Belief calculus, modal logics:
    • Russell & Norvig 2003, pp. 341–344,
    • Poole, Mackworth & Goebel 1998, pp. 275–277
  46. 46.0 46.1 OWL Example with RDF Graph. Ontologies and Semantic Web.
  47. Kuperman, G. J.; Reichley, R. M.; Bailey, T. C. (1 July 2006). "Using Commercial Knowledge Bases for Clinical Decision Support: Opportunities, Hurdles, and Recommendations". Journal of the American Medical Informatics Association. 13 (4): 369–371.
  48. MCGARRY, KEN (1 December 2005). "A survey of interestingness measures for knowledge discovery". The Knowledge Engineering Review. 20 (1): 39.
  49. Bertini, M; Del Bimbo, A; Torniai, C (2006). "Automatic annotation and semantic retrieval of video sequences using multimedia ontologies". MM ‘06 Proceedings of the 14th ACM international conference on Multimedia. 14th ACM international conference on Multimedia. Santa Barbara: ACM. pp. 679–682.
  50. Sikos, Leslie F. (June 2017). Description Logics in Multimedia Reasoning. Cham: Springer.
  51. Dreyfus & Dreyfus 1986.
  52. Lufityanto, G., Donkin, C., & Pearson, J. (2016). Measuring intuition: nonconscious emotional information boosts decision accuracy and confidence. Psychological science, 27(5), 622-634.
  53. 53.0 53.1 Expert knowledge as embodied intuition:
    • Dreyfus & Dreyfus 1986 (Hubert Dreyfus is a philosopher and critic of AI who was among the first to argue that most useful human knowledge was encoded sub-symbolically. See Dreyfus' critique of AI)
    • Gladwell 2005 (Gladwell's Blink is a popular introduction to sub-symbolic reasoning and knowledge.)
    • Hawkins & Blakeslee 2005 (Hawkins argues that sub-symbolic knowledge should be the primary focus of AI research.)
  54. Alan Turing discussed the centrality of learning as early as 1950, in his classic paper "Computing Machinery and Intelligence".(Turing 1950) In 1956, at the original Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: "An Inductive Inference Machine".(Solomonoff 1956).
  55. This is a form of Tom Mitchell's widely quoted definition of machine learning: "A computer program is set to learn from an experience E with respect to some task T and some performance measure P if its performance on T as measured by P improves with experience E."
  56. 56.0 56.1 Learning:
    • ACM 1998, I.2.6,
    • Russell & Norvig 2003, pp. 649–788,
    • Poole, Mackworth & Goebel 1998, pp. 397–438,
    • Luger & Stubblefield 2004, pp. 385–542,
    • Nilsson 1998, chpt. 3.3, 10.3, 17.5, 20.
  57. "What is Unsupervised Learning?". Archived at the Wayback Machine, 30 September 2018. deepai.org.
  58. Dostál, P., & Pokorný, P. (2009). Cluster analysis and neural network. In 17th Annual Conference Proceedings on Technical Computing Prague (pp. 131-57).
  59. Planning:
    • ACM 1998, ~I.2.8,
    • Russell & Norvig 2003, pp. 375–459,
    • Poole, Mackworth & Goebel 1998, pp. 281–316,
    • Luger & Stubblefield 2004, pp. 314–329,
    • Nilsson 1998, chpt. 10.1–2, 22.
  60. Information value theory:
    • Russell & Norvig 2003, pp. 600–604.
  61. Multi-agent planning and emergent behavior:
    • Russell & Norvig 2003, pp. 449–455
  62. Classical planning:
    • Russell & Norvig 2003, pp. 375–430,
    • Poole, Mackworth & Goebel 1998, pp. 281–315,
    • Luger & Stubblefield 2004, pp. 314–329,
    • Nilsson 1998, chpt. 10.1–2, 22.
  63. Planning and acting in non-deterministic domains: conditional planning, execution monitoring, replanning and continuous planning:
    • Russell & Norvig 2003, pp. 430–449.
  64. Miller, W. T., Werbos, P. J., & Sutton, R. S. (Eds.). (1995). Neural networks for control. MIT press.
  65. Natural language processing:
    • ACM 1998, I.2.7
    • Russell & Norvig 2003, pp. 790–831
    • Poole, Mackworth & Goebel 1998, pp. 91–104
    • Luger & Stubblefield 2004, pp. 591–632.
  66. "Versatile question answering systems: seeing in synthesis" Archived 1 February 2016 at the Wayback Machine., Mittal et al., IJIIDS, 5(2), 119–142, 2011.
  67. Applications of natural language processing, including information retrieval (i.e. text mining) and machine translation:
    • Russell & Norvig 2003, pp. 840–857,
    • Luger & Stubblefield 2004, pp. 623–630.
  68. U.S. Patent 6618697, Method for rule-based correction of spelling and grammar errors.
  69. Implementing spelling correction. Stanford NLP.
  70. Cambria, Erik; White, Bebo (May 2014). "Jumping NLP Curves: A Review of Natural Language Processing Research [Review Article]". IEEE Computational Intelligence Magazine. 9 (2): 48–57.
  71. Machine perception:
    • Russell & Norvig 2003, pp. 537–581, 863–898
    • Nilsson 1998, ~chpt. 6.
  72. Computer vision:
    • ACM 1998, I.2.10
    • Russell & Norvig 2003, pp. 863–898.
    • Nilsson 1998, chpt. 6.
  73. Speech recognition:
    • ACM 1998, ~I.2.7
    • Russell & Norvig 2003, pp. 568–578.
  74. Object recognition:
    • Russell & Norvig 2003, pp. 885–892.
  75. Robotics: ACM 1998, I.2.9, Russell & Norvig 2003, pp. 901–942, Poole, Mackworth & Goebel 1998, pp. 443–460.
  76. Moving and configuration space: Russell & Norvig 2003, pp. 916–932.
  77. Emotion and affective computing:
    • Minsky 2006.
  78. Scassellati, Brian (2002). "Theory of mind for a humanoid robot". Autonomous Robots. 12 (1): 13–24.
  79. Cao, Yongcan; Yu, Wenwu; Ren, Wei; Chen, Guanrong (February 2013). "An Overview of Recent Progress in the Study of Distributed Multi-Agent Coordination". IEEE Transactions on Industrial Informatics. 9 (1): 427–438.
  80. Poria, Soujanya; Cambria, Erik; Bajpai, Rajiv; Hussain, Amir (September 2017). "A review of affective computing: From unimodal analysis to multimodal fusion". Information Fusion. 37: 98–125.
  81. Pennachin, C.; Goertzel, B. (2007). "Contemporary Approaches to Artificial General Intelligence". Artificial General Intelligence. Cognitive Technologies.Berlin, Heidelberg: Springer.
  82. Roberts, Jacob (2016). "Thinking Machines: The Search for Artificial Intelligence". Archived at the Wayback Machine, 19 August 2018. Distillations. Vol. 2 no. 2. pp. 14–23. Retrieved 20 March 2018.
  83. 83.0 83.1 Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A.; Veness, Joel; Bellemare, Marc G.; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K.; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis (26 February 2015). "Human-level control through deep reinforcement learning". Nature. 518 (7540): 529–533.
  84. Sample, Ian (14 March 2017). "Google's DeepMind makes AI program that can learn like a human". the Guardian. Retrieved 26 April 2018.
  85. Goertzel, Ben; Lian, Ruiting; Arel, Itamar; de Garis, Hugo; Chen, Shuo (December 2010). "A world survey of artificial brain projects, Part II: Biologically inspired cognitive architectures". Neurocomputing. 74 (1–3): 30–49.
  86. "Cultivating Common Sense | DiscoverMagazine.com". Discover Magazine.
  87. Davis, Ernest; Marcus, Gary (24 August 2015). "Commonsense reasoning and commonsense knowledge in artificial intelligence". Communications of the ACM. 58 (9): 92–103.
  88. Winograd, Terry (January 1972). "Understanding natural language". Cognitive Psychology. 3 (1): 1–191.
  89. It’s Really Hard to Give AI “Common Sense”. Futurism.
  90. "The superhero of artificial intelligence: can this genius keep it in check?". the Guardian. 16 February 2016. Retrieved 26 April 2018.
  91. Nils Nilsson writes: "Simply put, there is wide disagreement in the field about what AI is all about" (Nilsson 1983, p. 10).
  92. AI's immediate precursors:
    • McCorduck 2004, pp. 51–107.
    • Crevier 1993, pp. 27–32.
    • Russell & Norvig 2003, pp. 15, 940.
    • Moravec 1988, p. 3.
  93. Haugeland, John (1985), Artificial Intelligence: The Very Idea, Cambridge, Mass: MIT Press.
  94. 94.0 94.1 Understanding the difference between Symbolic AI & Non Symbolic AI. Archived at the Wayback Machine, 5 December 2018. Analytics India.
  95. 95.0 95.1 95.2 95.3 Flasiński, M. (2016). Symbolic Artificial Intelligence. In Introduction to Artificial Intelligence (pp. 15-22). Springer, Cham.
  96. Haugeland 1985, pp. 112–117.
  97. The most dramatic case of sub-symbolic AI being pushed into the background was the devastating critique of perceptrons by Marvin Minsky and Seymour Papert in 1969. See History of AI, AI winter, or Frank Rosenblatt.
  98. What are heuristics? Archived at the Wayback Machine, 20 August 2019. Conceptually.
  99. McCarthy and AI research at SAIL and SRI International:
    • McCorduck 2004, pp. 251–259.
    • Crevier 1993.
  100. AI research at Edinburgh and in France, birth of Prolog:
    • Crevier 1993, pp. 193–196.
    • Howe 1994.
  101. Knowledge revolution:
    • McCorduck 2004, pp. 266–276, 298–300, 314, 421.
    • Russell & Norvig 2003, pp. 22–23.
  102. Giarratano, J. C., & Riley, G. (1989). Expert systems: principles and programming. Brooks/Cole Publishing Co..
  103. Kendal, S.L.; Creen, M. (2007), An introduction to knowledge engineering, London: Springer.
  104. Domingos 2015, chapter 6.
  105. Revival of connectionism:
    • Crevier 1993, pp. 214–215.
    • Russell & Norvig 2003, p. 25.
  106. Computational intelligence
    • IEEE Computational Intelligence Society Archived 9 May 2008 at the Wayback Machine.
  107. Domingos 2015, Chapter 4.
  108. "Why Deep Learning Is Suddenly Changing Your Life". Fortune. 2016.
  109. "Google leads in the race to dominate artificial intelligence". The Economist.
  110. Seppo Linnainmaa (1970). The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's Thesis (in Finnish), Univ. Helsinki, 6–7.
  111. Griewank, Andreas (2012). Who Invented the Reverse Mode of Differentiation?. Optimization Stories, Documenta Matematica, Extra Volume ISMP (2012), 389–400.
  112. Paul Werbos, "Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences", PhD thesis, Harvard University, 1974.
  113. Paul Werbos (1982). Applications of advances in nonlinear sensitivity analysis. In System modeling and optimization (pp. 762–770). Springer Berlin Heidelberg.
  114. Backpropagation:
    • Russell & Norvig 2003, pp. 744–748,
    • Luger & Stubblefield 2004, pp. 467–474,
    • Nilsson 1998, chpt. 3.3.
  115. Stochastic methods for uncertain reasoning:
    • ACM 1998, ~I.2.3,
    • Russell & Norvig 2003, pp. 462–644,
    • Poole, Mackworth & Goebel 1998, pp. 345–395,
    • Luger & Stubblefield 2004, pp. 165–191, 333–381,
    • Nilsson 1998, chpt. 19.
  116. 116.0 116.1 Bayesian Networks. Archived at the Wayback Machine, 9 July 2019. Bayesialab.
  117. Bayesian networks:
    • Russell & Norvig 2003, pp. 492–523,
    • Poole, Mackworth & Goebel 1998, pp. 361–381,
    • Luger & Stubblefield 2004, pp. ~182–190, ≈363–379,
    • Nilsson 1998, chpt. 19.3–4.
  118. Bayesian inference algorithm:
    • Russell & Norvig 2003, pp. 504–519,
    • Poole, Mackworth & Goebel 1998, pp. 361–381,
    • Luger & Stubblefield 2004, pp. ~363–379,
    • Nilsson 1998, chpt. 19.4 & 7.
  119. Bayesian learning and the expectation-maximization algorithm:
    • Russell & Norvig 2003, pp. 712–724,
    • Poole, Mackworth & Goebel 1998, pp. 424–433,
    • Nilsson 1998, chpt. 20.
  120. Bayesian decision theory and Bayesian decision networks:
    • Russell & Norvig 2003, pp. 597–600.
  121. 121.0 121.1 Delalleau, O., Contal, E., Thibodeau-Laufer, E., Ferrari, R. C., Bengio, Y., & Zhang, F. (2012). Beyond skill rating: Advanced matchmaking in ghost recon online. IEEE Transactions on Computational Intelligence and AI in Games, 4(3), 167-177.
  122. 122.0 122.1 "Artificial intelligence can 'evolve' to solve problems". Science | AAAS.
  123. Welcoming the Era of Deep Neuroevolution. Archived at the Wayback Machine, 17 November 2018. Uber Engineering.
  124. Russell & Norvig 2009, p. 1.
  125. N. Aletras; D. Tsarapatsanis; D. Preotiuc-Pietro; V. Lampos (2016). "Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective". PeerJ Computer Science.
  126. Russell & Norvig 2009, p. 1.
  127. "The Economist Explains: Why firms are piling into artificial intelligence". The Economist. 31 March 2016.
  128. Lohr, Steve (28 February 2016). "The Promise of Artificial Intelligence Unfolds in Small Steps". The New York Times.
  129. "33 Corporations Working On Autonomous Vehicles". CB Insights. N.p., 11 August 2016.
  130. West, Darrell M. "Moving forward: Self-driving vehicles in China, Europe, Japan, Korea, and the United States". Center for Technology Innovation at Brookings. N.p., September 2016. 12 November 2016.
  131. McFarland, Matt. "Google's artificial intelligence breakthrough may have a huge impact on self-driving cars and much more". The Washington Post 25 February 2015. Infotrac Newsstand. 24 October 2016.
  132. ArXiv, E. T. (26 October 2015). Why Self-Driving Cars Must Be Programmed to Kill [dead link]. Retrieved 17 November 2017.
  133. "10 Promising AI Applications in Health Care". Archived at the Wayback Machine, 15 December 2018. Harvard Business Review. 2018-05-10.
  134. Dina Bass (20 September 2016). "Microsoft Develops AI to Help Cancer Doctors Find the Right Treatments". Bloomberg.
  135. 135.0 135.1 Gallagher, James (26 January 2017). "Artificial intelligence 'as good as cancer doctors'". BBC News.
  136. Langen, Pauline A.; Katz, Jeffrey S.; Dempsey, Gayle, eds. (18 October 1994), Remote monitoring of high-risk patients using artificial intelligence (US5357427 A).
  137. 137.0 137.1 137.2 Neural Style Transfer: Creating Art with Deep Learning using tf.keras and eager execution. Archived at the Wayback Machine, 6 January 2019. Medium.
  138. "CTO Corner: Artificial Intelligence Use in Financial Services – Financial Services Roundtable [dead link]". Financial Services Roundtable. 2 April 2015.
  139. Matz, S. C., et al. "Psychological targeting as an effective approach to digital mass persuasion." Proceedings of the National Academy of Sciences (2017): 201710966.
  140. A.I. Is a Crapshoot. Archived at the Wayback Machine, 15 December 2018. TV Tropes.
  141. Benevolent A.I.. TV Tropes.
  142. Buttazzo, G. (July 2001). "Artificial consciousness: Utopia or real possibility?". Computer (IEEE). 34 (7): 24–30.
  143. Galvan, Jill (1 January 1997). "Entering the Posthuman Collective in Philip K. Dick's "Do Androids Dream of Electric Sheep?"". Science Fiction Studies. 24 (3): 413–429.
  144. 144.0 144.1 A.I. Artificial Intelligence. Great Movie.
  145. 145.0 145.1 Anderson, Susan Leigh. "Asimov's "three laws of robotics" and machine metaethics." AI & Society 22.4 (2008): 477–493.
  146. Three Laws Compliant. TV Tropes.
  147. McCauley, Lee (2007). "AI armageddon and the three laws of robotics". Ethics and Information Technology. 9 (2): 153–164.