
By confusing policy with ethics, the framework fails to take on the big issues in artificial intelligence

This week, the global elite have gathered in Davos, Switzerland, for the World Economic Forum. Idealists see it as a great opportunity for world leaders to meet in person. Cynics (like myself) view it as an ostentatious gathering of an elite class who don’t particularly speak for most people.

That being said, Davos does consistently produce meaningful results (this year, it seems David Attenborough’s speech on the environment will steal the show, which is a nice change of pace from the years dominated by Donald Trump or Xi Jinping).

For Singapore, Minister S Iswaran also used the summit to introduce a new tech initiative aimed at creating ethical guidance for the artificial intelligence industry.

It is called the Model Artificial Intelligence (AI) Governance Framework and is meant to help steer the industry towards positive development.

Overall, the framework is fine. Frankly, it is hard to get overly worked up about the initiative (yes, not a good statement in an opinion piece). But, because it confuses policy with ethics, the framework is essentially irrelevant.

Let me explain.

The two guiding principles of the plan are as follows:

  1. Decisions made by or with the assistance of AI are explainable, transparent, and fair to consumers.
  2. AI solutions are human-centric.

These guiding principles are benign and fall into the realm of platitudes. If they were backed with legal consequences (like the GDPR), then they would make a difference. But they are not, and that is by design.

Now to the crux of the issue. The Infocomm Media Development Authority (IMDA) used an example of a company targeting soft drinks towards certain consumers. In this hypothetical, the algorithm tells the company to push sugary drinks towards a buyer.

Selling products is generally a low-harm use of AI because it is up to the buyer to go through with the purchase (and IMDA admits as much). However, the use case also suggests the algorithm should be tweaked because high sugar intake can lead to diabetes.

This confuses ethics with policy. In Singapore, there is a gigantic push to get people to consume less sugar because of the city-state’s high rate of diabetes. But it is not unethical to sell someone a Sprite and should not be viewed as such.

Furthermore, one of Singapore’s most famous use-cases for artificial intelligence (putting facial recognition software on lamp posts) would be considered an egregious ethical violation in many nations.

The big issue with artificial intelligence is that it is taught by humans, and thus follows the morality of its creator. There are certain issues we can all agree on (thou shalt not kill), and it is those ethics we need to drill into AI.

But once we start to confuse politics and policy with ethics, we create a situation in which the guidelines are largely ignored.

This gap in focus was highlighted by Iswaran himself. At Davos, he was asked why large countries like Japan and the United States should take the AI framework seriously. He said:

“I think one of the questions is really around how – and this is again one of Singapore’s key value propositions – we are a small, open economy. We are pro-business. We are also keen to engender a rules-based, norms-based trading and economic environment globally. Therefore, when we propose some of these ideas, they tend to be seen in that context. It is more objective as opposed to some certain other jurisdictions that – maybe because of their size, or because of what is presumed to be their larger agenda or objective – the response from more neutral players can be different.”

Iswaran is one hundred per cent correct. But his answer approaches artificial intelligence from a business-first logic. In this world, issues of free trade, politics, and economics trump the other debates in the artificial intelligence field.

Singapore is a business-first country, so this makes sense. But the framework ignores large questions like, “Who is responsible if AI kills someone?”, “How do we provide jobs to people displaced by AI?” and “How do we prevent self-fulfilling prophecies?”

Because these questions, and other truly ethical dilemmas, are not actually addressed by the framework, most people will forget it exists in a few months’ time.

There is nothing inherently wrong with the AI initiative, and it is better than nothing, but it will make zero impact on the industry as a whole.
