The World Wide Web has remained largely unrestricted since its creation in the late 1980s. With rapid advances in emerging technologies, consumer data has become increasingly commodified by private companies, to varying extremes. U.S. data privacy standards, meanwhile, have remained largely static, shifting only with periodic swings in public opinion, with myriad implications for information security, user satisfaction, and various modes of analysis.
Nowadays, nearly every aspect of life is digital and quantified; everything from cell phones to smart appliances tracks our daily habits. The broad accessibility of social media and communication technologies has increased the volume, variety, and velocity with which these data points can be exploited. Private companies may use artificial intelligence and targeted algorithms to glean valuable insight from consumers’ interests and behavioral patterns. Many Americans, however, have struggled to keep pace with this changing digital culture. A recent study by the Pew Research Center suggests that a majority of Americans believe it is not “possible to go through daily life without having data collected about them.” Even as private-sector data collection is normalized and the resulting datasets circulate on the public market, policymaking circles have lagged in adapting outdated privacy regulations to next-generation technology. With nearly countless use cases, it is now vital to weigh the risks and benefits of data privacy practices across industry, as the traditional lines between publicly available information and unethical sourcing become increasingly blurred.
Free Choice and Informed Consent in the Digital Landscape
Much of the data privacy debate centers on two key factors: free choice and informed consent. In navigating the digital landscape, consumers must weigh trade-offs among their desired degree of privacy, convenience, and consumption of digital services. Most online retailers, entertainment applications, and social media outlets, by nature, solicit and collect data on user behavior. These online interactions, clicks, and cookies amount to a holistic profile in which the consumer is the product. Companies use machine learning to draw connections, and inform predictions at scale, about how certain browsing tendencies correspond to user preferences and purchasing behaviors. These targeted algorithms may then be used to produce tailored advertisements that simultaneously collect information and gauge public interest in self-reinforcing feedback loops. From this view, there is an inherent risk of compromising personal privacy in subscribing to online content, as rival firms compete for consumer attention. Users must decide what they value most in the digital landscape, be it privacy or convenience, and should exercise due diligence in each case, whether by reviewing posted terms and conditions or by opting out of online services altogether.
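The feedback loop described above can be illustrated with a minimal, hypothetical sketch. The topic names, weighting scheme, and function names here are invented for illustration; real recommendation systems are far more sophisticated, but the reinforcing dynamic is the same: each click is logged as data, and that data skews what is shown next.

```python
from collections import Counter

def recommend(weights: Counter, catalog: list[str]) -> str:
    # Pick the catalog topic with the highest accumulated click weight;
    # topics never clicked default to a weight of zero.
    return max(catalog, key=lambda topic: weights[topic])

def simulate_clicks(clicks: list[str], catalog: list[str]) -> str:
    weights = Counter()
    for topic in clicks:
        weights[topic] += 1  # every interaction is recorded as data
    return recommend(weights, catalog)

catalog = ["politics", "sports", "cooking"]
# A user who repeatedly clicks sports content is shown more sports
# content, reinforcing the loop described above.
print(simulate_clicks(["sports", "sports", "politics"], catalog))
```

The point of the sketch is not the algorithm's simplicity but its circularity: the output is fed by the very behavior it helps produce, which is why more data collection translates directly into more tightly catered content.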
Implications of Targeted Algorithms for Social Media Privacy
On the other hand, communication technologies and targeted digital marketing may also be misused in various capacities. This is especially true on popular social media sites such as Facebook, Instagram, and Twitter, where the nexus of powerful feed algorithms and clickbait can push users toward entertaining, albeit questionable, content. Individuals on both ends of the political spectrum have fallen victim to social media misinformation in recent years. These campaigns tend to be generated by political figures or internet trolls, who post intentionally inflammatory rhetoric to provoke or appeal to the intense emotions of a given audience. In most cases, the viewership and allure of online misinformation are sustained by those seeking entertainment and validation of pre-existing biases. Through repeated sharing and association, user feed algorithms curate content toward these beliefs, regardless of whether such narratives are valid or laden with false information. In this sense, the more information private companies collect, the more effectively they can push users toward catered content through click-based advertising.
More recently, organized crime groups have used publicly available information (PAI) to produce AI-generated deepfakes and conduct illicit activities. Given sufficient data and image-processing capabilities, such groups have exploited Instagram and other social media sites to blackmail users by threatening to release falsified videos and sensitive personal information. Deepfakes may depict individuals inappropriately and are often indistinguishable from legitimate media content, despite their alterations. These computer-generated images pose numerous risks for private individuals, who may be falsely displayed in pornographic content, and for political figures, who may be misrepresented in fabricated public statements. Deepfakes have set a troubling precedent for digital privacy and the integrity of open-source data channels, given the potential for self-interested actors to exploit loopholes in industry privacy standards and misappropriate consumer information for personal gain.
Last-Generation Regulations for Next-Generation Technology
The United States does not currently have a comprehensive set of data privacy standards. Rather, the U.S. Code contains a mix of issue-specific laws pertaining to areas like health, education, and digital communications. Although these laws may be effective at protecting certain types of information, such a patchwork of regulations may be insufficient to address the full scope of the privacy issues above. In reality, policymakers are caught in a difficult position when trying to conceptualize and regulate emerging technologies: they must either impose blanket privacy regulations, at the risk of impeding innovation and creative data applications, or potentially allow malpractice through unchecked data solicitation.
While overarching privacy regulations may be out of touch with current technology, independent organizations have successfully combated misinformation outside of federal policy frameworks. For example, Meta Platforms Inc. and the Centers for Disease Control and Prevention (CDC) recently undertook a public technology literacy campaign to inform social media users about the importance of source characterization and the risks of data misrepresentation. The campaign focuses largely on Facebook and Instagram, flagging posts about COVID-19 and directing users to peer-reviewed, layperson-oriented information on public health best practices. Most importantly, Meta and the CDC’s partnership empowered consumers to feel confident in the security and accuracy of the content they consume. This approach highlights the importance of balancing open user access to online services with continued avenues for platform optimization.
From an open-source intelligence (OSINT) point of view, social media can be an excellent tool for conducting online investigations and identifying trends in public sentiment. However, these outlets must be used with caution, and only when appropriate, given the potential to violate citizens’ civil liberties by blurring the lines between closed- and open-source information. Amid major shifts in digital privacy regulations, Praescient Analytics has maintained consistent data-handling standards in line with those of the U.S. government. Since the company’s inception in 2011, Praescient’s analysts have endeavored to keep private citizens’ data safe by operating within the bounds of public legislation and pioneering industry best practices. In the coming years, Praescient will continue to maximize the utility of its OSINT services, turning PAI into actionable insights without compromising the company’s integrity or core competencies.