Search for Regulation: The Tangled Web of Digital Data Ethics


Today, our technology landscape is dictated by data — with advertising technology being no exception. With this in mind, it’s no surprise that digital marketers and ad agencies across the globe cite data science and analysis as the number one skill they seek.

For years, data scientists were (and in many regards, continue to be) the modern-day prospectors seeking gold in the form of actionable insights. And just as the ’49ers ventured into uncharted territory to find riches with few laws standing in their way, technologists have been left to their own devices as well: free to collect, refine and apply valuable personal data with little to no regulation.

The questions now are: who will usher in these regulations, and how will they alter the playing field of digital marketing?

The Modern-Day Cache Grab

At the most basic level, advertisers and the platforms they use receive the majority of scrutiny and public outrage. In 2018, Silicon Valley CEOs testified before Congress, defending the merits of their platforms as lawmakers scrambled to wrap their heads around this new age of connectivity. To this day, no meaningful national law exists to fully govern or protect the data of US citizens.

While the EU’s GDPR and the California Consumer Privacy Act may be glimmers of light for those who demand digital discretion, many remain skeptical, and with good reason.

To date, Google remains the only US company to be fined under GDPR, even though a majority of privacy professionals at US companies admit their organizations are still not compliant. Stateside, California’s privacy act is a year away from its 2020 enforcement date and will surely be met with the same levels of resistance and gaming.

On a national level, a Senate subcommittee has promised a draft of the US’s own GDPR-equivalent legislation, but it’s hard to imagine digital security trumping the current national obsession with border security. Furthermore, US lawmakers have proven resistant to stronger data protection at the highest levels.

The Rising Tide Lifts Some Boats, Sinks Others

In 2017, protections that limited the ways in which internet service providers (ISPs such as AT&T, Comcast and Time Warner) could collect, use, share or sell user data were repealed. As a result, ISPs are now free to collect and sell subscribers’ personal information without consent, while companies such as Google and Facebook find themselves on the receiving end of pending regulations.

These regulations seek to require users’ consent before their private data is tracked and gathered. ISPs, meanwhile, are simply required to provide a mechanism for users to opt out of those practices. The difference is that ad platforms and analytics networks would have to gain consent before data collection begins, while ISPs need only offer a way for the collection to stop after the fact.
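
For readers who want the distinction spelled out, the sketch below contrasts opt-in and opt-out collection. It is purely illustrative, assuming invented names throughout; none of the types, functions or the analytics call correspond to any real platform’s API.

```typescript
// Hypothetical sketch only: every name here is invented for illustration
// and is not part of any real ad platform's or ISP's API.

type ConsentStatus = "granted" | "denied" | "unknown";

interface Subscriber {
  id: string;
  consent: ConsentStatus; // what the person has actually agreed to
}

// Placeholder transport; a real system would send events to an
// analytics backend instead of logging them.
function sendToAnalytics(userId: string, event: string): void {
  console.log(`collected: user=${userId} event=${event}`);
}

// Opt-in model (what the pending regulations ask of ad platforms):
// nothing is collected unless consent is explicitly on record.
function trackEventOptIn(user: Subscriber, event: string): void {
  if (user.consent !== "granted") {
    return; // collection never starts without a "yes"
  }
  sendToAnalytics(user.id, event);
}

// Opt-out model (the standard ISPs are held to): collection happens
// by default and stops only after the user finds and uses the opt-out.
function trackEventOptOut(user: Subscriber, event: string): void {
  if (user.consent === "denied") {
    return; // stops only once the user has acted
  }
  sendToAnalytics(user.id, event);
}

// A new user who has never been asked: the opt-in version collects
// nothing, while the opt-out version collects by default.
const newUser: Subscriber = { id: "abc123", consent: "unknown" };
trackEventOptIn(newUser, "page_view");  // no data leaves
trackEventOptOut(newUser, "page_view"); // data is collected
```

The logic is trivial on purpose: the default state does all the work, and that default is exactly what separates the rules proposed for ad platforms from the ones ISPs operate under.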

Ultimately, users can choose not to use services whose data policies don’t align with their sensibilities. Google users can switch to DuckDuckGo. Facebook users can manage their ad preferences. Web browsers can block display ads with a few clicks. But because of monopolization, limited competition and high switching costs, it is much harder for consumers to switch ISPs.

According to a 2018 poll of US internet users, 81% of respondents were uncomfortable with “companies being able to sell data related to [them], such as an email address, for online advertising purposes.” One can imagine how the same audience would feel about the sale of their phone numbers, paired with live location data.

“Do Not Call” Does Not Work

In 2019, spam robocalls are inescapable. If it feels like there are more spam calls now than ever before, that’s because there are. Not only are they frequent, but they’re targeted, with many robocall operations “cloaking” the numbers that appear in a recipient’s caller ID to make them look like local callers.

Powered by location data acquired and sold by cellular and broadband providers (and enabled by historically cheap calling technology), spam callers have actually benefited from US legal precedents.

In 2015, the FCC passed an order to limit the use of autodialers: machines that can automatically place calls to a list of numbers, or to random numbers, with no human assistance. In 2018, the D.C. Circuit Court of Appeals essentially overturned this order, taking issue with its definition of an autodialer.

And with that, a legal precedent was set. Although telemarketers are required to abide by a short list of federal rules and regulations, gray areas remain, and the so-called “Do Not Call” list has been rendered largely impotent.

The Concerning Implications of Artificial Intelligence

As technology becomes more powerful and less expensive, the opportunities for exploitation grow. At its 2018 annual developer conference, Google demoed Duplex, an A.I. assistant built to “accomplish real world tasks over the phone.” The recording of that demo spread across the internet like wildfire, showcasing the system’s natural conversational style and seamless problem solving.

Online reactions were equal parts impressed and concerned. In the video, the people on the other end of the line had no idea they were not speaking to a human, which drew a wave of criticism from privacy advocates.

After TechCrunch characterized Duplex as “deceptive” and ultimately “a failure of user experience,” Google released a statement assuring consumers that the voice assistant would both announce itself as a robot and obtain consent from the person on the other end of the line before recording the call.

While that is a step in the right direction, the fact that this technology is available at the consumer level is not comforting for many. Combining this technology with the current lack of regulation leaves room for exploitation on a massive scale.

A Call for Accountability in the Age of AI

In many cases, advertising is where the worlds of consumers and technology most heavily intersect. For that reason, marketers often find themselves acting as moral compasses for brands, forced to decide where segmentation crosses into profiling, automation into deception, and influence into manipulation. That position makes them accountable, sure, but only partially responsible.

All of these instances of tech and data overreach are core digital marketing tenets taken too far. ISPs are obtaining, refining and modeling user data with the intent of pushing micro-targeted advertising and advanced customer segmentation. Cloaked caller ID numbers masquerading as local callers are a poor execution of content and context personalization.

In all of these examples, there is one through line that makes them truly insidious: the removal of humanity from the equation.

At our core, we as marketers are in the business of building trust: trust in our products, trust in our people, and trust in our processes. Yet with the platforms available to advertisers, each offering a multitude of targeting options, every customer becomes a data point.

Every experience is analyzed as an opportunity to optimize a conversion rate, and every action (and inaction) becomes a tracked and charted metric. While that measurement is core to what we do, handling it responsibly matters now more than ever.

It Begins with Us

As the old adage says, “There’s no such thing as a free lunch.” The internet is no exception. Users must now weigh every single action taken against the likely cost of future unwanted emails or targeting. Consumers are wary, suspicious and less likely to respond to marketing efforts.

As marketers, we must combat this by going back to the basics of earning trust. As we move forward, it’s important that we take deliberate steps to be good stewards of our customers’ information. Not only will this earn the loyalty of our target audiences, but it will also strengthen our brands well into the future.
