Applying human intelligence to the challenges of artificial intelligence
- Published: 2024-10-10
A lot of big claims are made about the transformative power of artificial intelligence. But it is worth listening to some of the big warnings too. Last month, Kate Crawford, principal researcher at Microsoft Research, warned that the increasing power of AI could result in a “fascist’s dream” if the technology were misused by authoritarian regimes.
“Just as we are seeing a step function increase in the speed of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” Ms Crawford told the SXSW tech conference. The creation of vast data registries, the targeting of population groups, the abuse of predictive policing and the manipulation of political beliefs could all be enabled by AI, she said.
Ms Crawford is not alone in expressing concern about the misapplication of powerful new technologies, sometimes in unintentional ways. Sir Mark Walport, the British government’s chief scientific adviser, warned that the unthinking use of AI in areas such as medicine and the law, involving nuanced human judgment, could produce damaging results and erode public trust in the technology.

Although AI had the potential to enhance human judgment, it also risked baking in harmful prejudices and giving them a spurious sense of objectivity. “Machine learning could internalise all the implicit biases contained within the history of sentencing or medical treatment — and externalise these through their algorithms,” he wrote in an article in Wired.
As ever, the dangers are a lot easier to identify than they are to fix. Unscrupulous regimes are never going to observe regulations constraining the use of AI. But even in functioning law-based democracies it will be tricky to frame an appropriate response. Maximising the positive contributions that AI can make while minimising its harmful consequences will be one of the toughest public policy challenges of our times.
For starters, the technology is difficult to understand and its use is often surreptitious. It is also becoming increasingly hard to find independent experts, who have not been captured by the industry or are not otherwise conflicted.
Driven by something approaching a commercial arms race in the field, the big tech companies have been snapping up many of the smartest academic experts in AI. Much cutting-edge research is therefore in the private rather than public domain.

To their credit, some leading tech companies have acknowledged the need for transparency, albeit belatedly. There has been a flurry of initiatives to encourage more policy research and public debate about AI.
Elon Musk, founder of Tesla Motors, has helped set up OpenAI, a non-profit research company pursuing safe ways to develop AI.
Amazon, Facebook, Google DeepMind, IBM, Microsoft and Apple have also come together in the Partnership on AI to initiate more public discussion about the real-world applications of the technology.

Mustafa Suleyman, co-founder of Google DeepMind and a co-chair of the Partnership, says AI can play a transformative role in addressing some of the biggest challenges of our age. But he accepts that the rate of progress in AI is outstripping our collective ability to understand and control these systems. Leading AI companies must therefore become far more innovative and proactive in holding themselves to account. To that end, the London-based company is experimenting with verifiable data audits and will soon announce the composition of an ethics board to scrutinise all the company’s activities.
But Mr Suleyman suggests our societies will also have to devise better frameworks for directing these technologies for the collective good. “We have to be able to control these systems so they do what we want when we want and they don’t run ahead of us,” he says in an interview for the FT Tech Tonic podcast.
Some observers say the best way to achieve that is to adapt our legal regimes to ensure that AI systems are “explainable” to the public. That sounds simple in principle, but may prove fiendishly complex in practice.

Mireille Hildebrandt, professor of law and technology at the Free University of Brussels, says one of the dangers of AI is that we become overly reliant on “mindless minds” that we do not fully comprehend. She argues that the purpose and effect of these algorithms must therefore be testable and contestable in a courtroom. “If you cannot meaningfully explain your system’s decisions then you cannot make them,” she says.
We are going to need a lot more human intelligence to address the challenges of AI.