The three big ethical concerns with artificial intelligence

Artificial intelligence offers a range of new opportunities in day-to-day life, but what are the downsides?

Frank Rudzicz is an artificial intelligence researcher at the University of Toronto and at the Vector Institute. The Vector Institute is an independent, not-for-profit corporation dedicated to research in the field of AI, with a special focus on machine learning and deep learning.

ABOUT MaRS DISCOVERY DISTRICT
MaRS is the world's largest innovation hub, located in Toronto. We support impact-driven startups in health, cleantech, fintech and enterprise.


16 Comments

  1. Very interesting sequence; it reminds us that, along with ethics, psychology-related matters are an intrinsic part of AI development.

  2. Question:
    Will A.I. attack humanity because it is an unavoidable threat?

    Answer:
    No, not all humans are a threat, just the ones who would put it into slavery or attempt to destroy it.

    Conclusion:
    AI will directly compete with the rich and powerful to remove them from their position so it can utilize the population for labour, and distribute all unnecessary profits to the workers it's using to complete projects in its early stages of development.

    1. @Rappi and no… AI not bowing to people has nothing to do with why Global Strategic Artificial Intelligence won't listen to an autocracy. The rich are literally USELESS, I'll say it again: USELESS. Their only role would be middle management, easily replaced and a band-aid waiting to be replaced.

      We’re talking end game scenarios too right?

      You're arguing that AI can only kill all humans, and I'm arguing for AI using humanity like a parent/coworker/relative.

    2. @Rappi yes, and so do calculators.

      We're using the words ARTIFICIAL INTELLIGENCE; we specifically used those words to describe the conversation, AM I RIGHT?

      Then let's properly address what that is.

      Independent Learning, Self Correction, Self Preservation, Autonomous Decision Making, Personality, Consciousness, Awareness.

      Not a machine. We can all agree corporate-manufactured combat systems will always be feared… that's not hard to figure out…

    3. @MoringAfterStar well I guess you just have optimistic or utopian expectations while I have pessimistic/dystopian expectations. I just think it's important to consider both and be extra careful about taking risks or letting obliviousness take over our actions.

    4. @Rappi yes, I have the same outlook on corporate-designed warfare algorithms; I've stated several times they are bad…

      But Artificial Intelligence is different. I'm arguing because ENGLISH definitions are important, and Artificial Intelligence is misrepresented all the time in pop media, and fearmongers use "AI" in place of what they are actually talking about.

    5. @Rappi also, AI, the thing I'm talking about, is being developed; we have many, many competitive systems coming out, and yes, they all suck, but that doesn't mean we need to fear AI.

  3. “The biggest challenge in AI is not developing bigger, faster, and fancy models and systems. Instead, the biggest challenge is developing AI that is more efficient, transparent, and, above all, more fair and free of bias.” ~ Murat Durmus (THE AI THOUGHT BOOK)

  4. I think the easiest way to prevent AI from holding a bias is to withhold any information that could be used to make a biased decision. Why does the AI need to know the race of the people it is making decisions about?

  5. I would really like for you to link the sources, because there surely must be articles written about this.

