
Apple Siri Issue Confirms Privacy Challenges Come With Enhancing AI

In AI by Daniel Newman


Apple privacy is being called into question after a whistleblower reported that Apple contractors regularly hear confidential medical information, drug deals, and recordings of couples having sex as part of their job providing quality control, or "grading," for the company's Siri voice assistant, according to numerous reports published late this week by The Guardian, The Verge, CNBC, and others.

While Apple does not explicitly disclose this in its consumer-facing privacy documentation, a small proportion of Siri recordings is passed on to contractors working for the company around the world. They are tasked with grading the responses on a variety of factors, including whether the activation of the voice assistant was deliberate or accidental, whether the query was something Siri could be expected to help with, and whether Siri's response was appropriate.

Apple says the data “is used to help Siri and dictation … understand you better and recognize what you say.”

Read more about the Apple privacy (Siri) story on The Guardian.

Analyst Take: It is well understood in the tech community that a symbiotic relationship between human and machine is required to make AI something people can benefit from daily. Most people are so pleased with the experience of Siri, Alexa, Cortana, or other AI-based voice assistants that they tend not to think much about what happens to the exchanges they have with their devices. However, as the technology rapidly proliferates, we are hearing more regularly about the privacy violations, or at the very least infringements, that come with these AI-enhanced experiences.

At WWDC, Apple made a big statement about privacy. I was one of those somewhat inspired by the announcements, as I do aspire to see the tech industry do more to put privacy controls back into the hands of the consumer. While I do not expect Big Tech to stop its rapid collection and utilization of data to build products, services, and experiences for its users, I remain hopeful that we can separate our most intimate details and data from the ears of Big Tech and the contractors it uses to rate and grade the conversations.

Personally, I don't take much issue with the capturing, listening, and use of content where we deliberately activate the personal assistant. Usually there is some level of agreement to this in our terms of service. What has been so eye-opening is the consistent whistleblowing on what appear to be more explicit breaches of our privacy through passive listening, or the awakening of devices that have not been activated. To the most unaware user, the continued breach of privacy should be a wake-up call. To the more conscious technophile, it should be straight-up alarming. Security and privacy researchers have also well documented that the supposedly benign nature of "anonymous" data isn't so anonymous: for Big Tech, the data can easily be linked back to specific users.

Bottom line: if we want AI to work flawlessly and as humanistically as possible, it is going to take human ears to improve the systems. Apple, Amazon, Google, Microsoft, and all the big players building AI-based tools are doing this at some level. The rub for the consumers who are paying any attention is the lack of transparency from some of the companies, which leaves them vulnerable to having conversations heard that shouldn't be, or at the very least aren't expected to be. It will be interesting to see whether the Justice Department probe takes issue with any of these ongoing stories against Big Tech, in which data is being captured in situations where the device seemingly was never activated.

To some extent, I don't see why Big Tech doesn't just come out with clear terms about its AI training activities and its use of contractors to improve the experience. So few people pay attention and read the details that this kind of clarity would make these practices seem less malicious, and I'm somewhat confident most people would not change their behavior whatsoever. The only thing that will fix these behaviors is real regulation; even large FTC fines aren't going to stop the behavior if the profit windfall is too great. Just ask Facebook.

Meanwhile, this type of listening is, right now, the only way to make AI work the way most people seem to want it to. So until AI ethics are taken more seriously, this is a sensational story that will lead to absolutely no change for consumers or their behavior.

More Analysis from Futurum Research:

Google Pops On Earnings, Provides Big Update On Cloud Performance

DOJ Finally Approves Sprint T-Mobile Merger

Amazon Web Services: Stay The Course, 37% Growth Is Absolutely Fine


The original version of this article was first published on Futurum Research.

Daniel Newman is the Principal Analyst of Futurum Research and the CEO of Broadsuite Media Group. Living his life at the intersection of people and technology, Daniel works with the world's largest technology brands exploring digital transformation and how it is influencing the enterprise. From big data to IoT to cloud computing, Newman makes the connections between business, people, and tech that are required for companies to benefit most from their technology projects, which leads to his ideas regularly being cited in CIO.com, CIO Review, and hundreds of other sites across the world. A five-time best-selling author, including his most recent book "Building Dragons: Digital Transformation in the Experience Economy," Daniel is also a Forbes, Entrepreneur, and Huffington Post contributor. An MBA and graduate adjunct professor, Daniel Newman is a Chicago native, and his speaking takes him around the world each year as he shares his vision of the role technology will play in our future.
