
Legal context of monitoring digital discourse on social media platforms

Level required: Intermediate
Alessandro Polidoro
attorney at law & digital rights advocate

In today's digital world, there is a growing demand for big online platforms to be more transparent and accountable. Researchers, journalists, human rights advocates, governments and users all want to know what these platforms are up to. Even businesses, content creators and public figures who rely on these platforms struggle to understand how they work because these entities are owned by big private companies that often keep their operations hidden and benefit from information asymmetry.

Such a problematic dynamic poses one serious question: how can we investigate social media platforms in a way that upholds ethical standards and, most importantly, does not clash with the existing legal framework in which they operate? Clearly, there are many aspects of these online platforms that require much more public scrutiny: the algorithms they use for content personalisation (so-called “recommender systems”), their tracking and profiling approaches, their content moderation systems and the related practices they implement, the ecosystem in which they market everybody's data and share it with data brokers, the various techniques used for targeted advertising, and the systemic risks (to democratic societies) that exist on these platforms.

In this chapter, we will discuss some of the main legal and technological issues that are relevant for analysing social media platforms from a public-good perspective, with researchers from civil society and academia in mind.

Imagine a world where these platforms are more open about how they work, where the rules are clear and where everyone has a say in how they operate. Such a world could lead to a more informed and fair society. Learn more about the intricacies of monitoring social media platforms and contribute to making the digital landscape fairer and more transparent.

1. Useful legal concepts when analysing social media platforms

Social media platforms are pervasive in our lives and deeply entangled with the exercise of some of our rights and many aspects of our public life. Yet, most of these platforms were not created with this in mind. On the one hand, we can use them to exercise our freedom of expression, our right of association or our freedom to conduct a business. On the other, we must not forget that by doing so we are implicitly trusting that their legitimate owners will be collaborative and supportive of everyone’s self-determination even though, legally speaking, they are not required to be.

Problems connected to this circumstance can be found in contested practices such as “deplatforming” (1) users, which consists of removing and banning an individual or a group from a mass communication medium such as a social networking or blogging website. The case is similar for so-called “shadow banning” (2), which is the practice of blocking or partially blocking a user or a user's content from some areas of an online community in such a way that the ban is not readily apparent to the user.

The decision on whether and how to enforce a ban from a social media platform can be taken unilaterally and with a lack of transparency because these spaces constitute private property, and it is up to the platform providers to decide who can access them and who cannot.

This very same logic applies when we look at the analysis of platforms for research purposes.

While lawmakers and regulators are still in the process of working towards a common set of rules for online platforms that would apply internationally, it is essential for you to be aware that different countries have their own approaches to regulating these platforms. This means that the way online platforms are managed can vary from one country to another. However, this variation should not overwhelm you. There are some fundamental principles that you can keep in mind when considering how these platforms are controlled.

It's important to recognize that the internet does not have strict, universal rules that every country follows. Instead, each nation has the authority to make its own regulations regarding online platforms. These rules can cover a wide range of topics, including how data is handled, what content is allowed and how businesses compete.

As a researcher of online platforms, it's a good idea to stay informed about the specific rules that apply in the country or countries where you are operating. This knowledge will help you navigate the digital landscape more effectively and responsibly. Additionally, understanding that while rules may differ, some core principles remain constant can serve as a compass for your online exploration.

What is the legal concept of a digital platform?

One of the key principles that underpin all legislative efforts in this field is the legal notion of “digital asset”, which is anything that exists only in digital form and comes with a distinct usage right or distinct permission for use. Digital assets encompass a wide range of items, such as software, photographs, logos, illustrations, animations, audiovisual content, presentations, spreadsheets, digital artwork, word documents, email messages, websites, and numerous other digital formats, along with their associated metadata.

From a legal standpoint, these kinds of assets represent a digital extension of one's physical assets: a virtual space that forms part of an individual's personal sphere and, as such, deserves legal protection.

Following this logic, online platforms are considered digital assets owned by private subjects, to which a very ancient legal concept applies, known since Roman law as ius excludendi alios, literally “the right to exclude others” (3). This concept refers to the fundamental property right that allows a property owner to exclude others from using or entering their property and to create a set of rules to be followed while inside of it. In essence, it is the right to control access to and use of one's property and is a key element of property ownership in many legal systems. It means that the property owner has the authority to determine who can access their property and under which conditions.

If you want to monitor and analyse social media platforms, notably without their consent, you need to be prepared that this could represent a violation of this “right to exclude others” and consequently translate into civil or even criminal liability, if not both. Platform owners could decide to bring legal action against people performing social media analysis or, at the very least, permanently ban them from the platform.

For the scope of this chapter, we will not delve into the ramifications of criminal liability, as matters of criminal law vary significantly from nation to nation within their specific legal systems. It is just worth specifying that this form of legal responsibility often involves the actual violation of security measures put in place on a given platform and, in most cases, the actual intent to tamper with or damage the digital infrastructure (even if this is still highly debated among scholars).

What kind of civil liability can derive from platform analysis?

Another key principle that must be considered relates to the risk of civil liability, which is usually divided into contractual liability and tort liability. In brief, contractual liability arises from the breach of an agreement stipulated between parties, while tort liability is the responsibility to compensate for harm or injury caused to another person or their property through wrongful conduct outside of a contract.

In the field of social media platform analysis, you are most likely to be liable at a civil level if you breach the terms of service and/or user agreements, or infringe on copyright, intellectual property, or trade secrets. In addition, you may be held liable if you violate data protection law or the privacy of other users.

Examples of tort liability can cover a wide range of civil wrongs, such as the disruption of the service provided by a platform or the violation of its trade secrets. This can result in compensation or remedies for the damaged party.

The interesting part, however, is that there is also much to learn about the ethics and behaviour of big platforms from the way they position themselves inside this vast and often convoluted legal context. In a certain sense, analysing their legal practices can already reveal much about the way they run their business.

The Digital Services Act allows so-called “vetted researchers” (4) to file for data access to conduct independent audits, which significantly increases the possibilities for public scrutiny of the digital activities on social media platforms. This research access is promising and important though the specifics of this procedure are yet to be decided. It will be interesting to see how platforms comply with and implement these new rules.

1.1 Terms of Service and User Agreements: What do they mean for the Analysis of Social Media Platforms?

Every user who navigates the internet, whether using social media platforms, e-commerce marketplaces, streaming services or others, will at some point be confronted with the famous “wall of text” of the online world: the Terms of Service (ToS).

These ToS govern our relationship with platforms, delineate our rights and impose restrictions that bear profound legal implications. They represent a significant portion of the conditions under which the owners of a certain platform are willing to allow other people to interact with their digital assets and, most importantly, constitute part of an agreement (a contract, in many ways) between user and platform. Demystifying these documents can help lawyers, scholars, and researchers better understand the actions they are allowed to perform and can sometimes also offer a reference point between what is supposed to happen on a platform and what can actually be observed.

At first glance, Terms of Service and user agreements may appear verbose, intricate, and dense. However, beyond the intricate legal jargon lies an interplay of restrictions and allowances that determine how platforms can be analysed, studied, and critiqued. In an age where digital platforms influence economies, democracies, and personal lives, understanding these limitations represents a precious strategic advantage.

One primary concern revolves around data access and the possibility of monitoring and understanding how the platform works. Platforms, through their ToS, often restrict how users can collect data, analyse it, or share it. For researchers aiming to study platform algorithms, behaviours or biases, these restrictions can severely impede transparency and understanding of the concrete underlying mechanisms that they are trying to observe. There is a delicate balance to strike between data ownership and platform integrity and the actual ability to study these platforms and hold them accountable in case of potential malpractices.

Can you use automated tools to analyse social media platforms?

Although it is always necessary to make important distinctions case by case, one general tendency that often characterises the prescriptions of the terms of service imposed by social media platforms is the prohibition for users to utilise automated tools when interacting with the digital infrastructure. Techniques like “web scraping” (5) or the use of a “botnet” (6) are almost always forbidden by the ToS of the major social media platforms (more on web scraping in the chapter here). Researchers must therefore carefully weigh whether to resort to other methodologies to carry out their work.

One alternative may be the use of Application Programming Interfaces (APIs) (more on APIs in the chapters on Twitter and on TikTok), which many websites and online services provide directly to allow developers to access and retrieve data in a structured and standardised way. APIs are a more reliable and less ethically challenging method of gathering data compared to web scraping because they are designed for data exchange. Researchers can use programming languages like Python to interact with APIs and retrieve data, as sketched below.
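To make this concrete, the minimal Python sketch below queries a hypothetical research API over HTTPS and returns the matching posts. The endpoint URL, parameter names and access token are assumptions for illustration only: each platform documents its own API, authentication scheme and permitted uses, and those terms govern what you may actually retrieve.

```python
# Minimal sketch of retrieving posts through a platform's research API.
# The endpoint, parameters and token are placeholders, not a real platform's API.
import requests

API_URL = "https://api.example-platform.com/v1/posts/search"  # hypothetical endpoint
API_TOKEN = "YOUR_RESEARCH_ACCESS_TOKEN"                       # issued by the platform

def fetch_posts(query: str, max_results: int = 100) -> list:
    """Request posts matching a query and return them as a list of dictionaries."""
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    params = {"query": query, "max_results": max_results}
    response = requests.get(API_URL, headers=headers, params=params, timeout=30)
    response.raise_for_status()  # fail loudly on authentication or rate-limit errors
    return response.json().get("data", [])

if __name__ == "__main__":
    for post in fetch_posts("recommender systems", max_results=10):
        print(post.get("id"), str(post.get("text", ""))[:80])
```

Whatever the platform, the documented rate limits and data-use conditions attached to the API still apply and should be checked before any collection starts.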

Another solution can be to rely on public datasets (more on public datasets in the chapter here), which many organisations and institutions make freely available for research and analysis. When possible, researchers may also purchase datasets from data providers and vendors, whose datasets are often clean, structured, and ready for analysis. Alternatively, there is always the solution of manual data entry; while it is the most time-consuming method, it involves copying information from web pages into a spreadsheet or database and can be suitable for small-scale data extraction tasks.
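As an illustration, the short sketch below loads a hypothetical public dataset of posts with pandas and produces a first descriptive overview. The file name and column names are assumptions; adapt them to whatever the publishing organisation actually documents.

```python
# Minimal sketch of exploring a public dataset instead of scraping a platform.
# "public_posts_dataset.csv" and its columns are hypothetical placeholders.
import pandas as pd

posts = pd.read_csv("public_posts_dataset.csv")
posts["created_at"] = pd.to_datetime(posts["created_at"])

# Descriptive overview: posting volume per month and the most active accounts.
print(posts.groupby(posts["created_at"].dt.to_period("M")).size())
print(posts["author_id"].value_counts().head(10))
```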

However, it is worth specifying that the use of automation in platform monitoring is not per se legally forbidden, notably in cases in which it is used to speed up the process of gathering publicly available data from a website and does not interfere with the regular functioning of a platform’s servers.

The legitimacy of techniques like scraping is contested. The research community may point to norms such as the Digital Services Act as a legal foundation, yet the legal argument is challenging and there are always ethical considerations that must be weighed carefully.

This is to say that the use of automation to analyse social media platforms can certainly give grounds, in those cases where the ToS forbids such practice, for a ban from said platform. However, this will not necessarily translate into civil liability. After all, enforcing the agreement between a user and the owner of a platform is significantly different from interpreting and enforcing the law, something that only the judiciary can do.

What to pay attention to when reading ToS?

It does not come as a surprise then, that these agreements often contain clauses pertaining to dispute resolution, often pushing disagreements into arbitration, and preventing class-action lawsuits. Such stipulations can limit the tools available for users and researchers to challenge or question platform actions, thereby creating potential power imbalances between individual users and tech-giants.

Yet, while these agreements impose limitations, they also provide insights. A meticulous examination of a platform's ToS can reveal much about its priorities, ethics, and business model. Is the platform user-centric or does it prioritise its economic objectives? Do its legal stipulations push for transparency or opacity? The answers to these questions may appear in the lines of user agreements and are essential for any comprehensive platform analysis.

The road ahead for lawyers and researchers is both challenging and fascinating. As we explore these agreements, we must develop methodologies to analyse platforms within their legal bounds while advocating for transparency, accountability, and fairness. Strategies may range from seeking amendments in these agreements to leveraging technology for compliant data collection and analysis.

1.2 Reflecting on Copyright, Intellectual Property and Trade Secrets: Implications for Platform Analysis

In the digital age, where ownership of intangible goods grows in importance and tireless innovation drives progress, the rules about copyright, intellectual property (IP), and trade secrets become indicators of great significance. These are the rules upon which digital platforms are built and, for anyone studying these platforms, it is crucial to understand how these legal instruments work together.

Why does copyright law matter in social media analysis?

The field of copyright law has undergone a profound transformation in the past decades. In this era of rapid information exchange, content is disseminated, reimagined, and redistributed at an unprecedented pace, presenting a myriad of challenges for platforms when it comes to ascertaining content ownership and navigating usage rights. This evolving landscape has given rise to intriguing questions, such as how one can attribute originality in a meme culture (7) and how to determine the bounds of fair use in an environment teeming with remixes and mashups.

A clear example of this comes from the many legal battles started against the practice of training generative artificial intelligence models using copyrighted material of renowned writers. As of today, for instance, there are three different lawsuits for copyright infringement lodged against the renowned company OpenAI (8).

For researchers, the intricate layers of copyright law wield a profound influence on how content on digital platforms can be utilised, analysed, and critiqued. The shifting landscape demands a nuanced understanding of copyright principles, such as the doctrine of fair use, which varies from jurisdiction to jurisdiction. This requires a careful navigation of the legal landscape to avoid the potential pitfalls of copyright infringement. Researchers must be vigilant in respecting the rights of content creators while pursuing their academic or investigative endeavours.

What can we learn about a platform from its Intellectual Property portfolio?

In the broad spectrum of legal rights pertaining to intellectual property, it is possible to observe yet another dynamic aspect of the strategies implemented by big platforms.

IP rights provide a safeguard for platforms' innovations, fortifying their position and guaranteeing the inviolability of their distinctive value propositions. Through this defensive role, platforms can enjoy a sense of security, knowing that their creative outputs and innovative solutions are shielded from unauthorised duplication, thereby preserving their exclusivity in the market and sustaining customer loyalty.

At the same time, intellectual property can also provide platforms with the means to assert dominance within market landscapes. With this offensive capability, platforms can strategically employ their IP rights to assert their presence and challenge competitors. By aggressively leveraging their patents, trademarks and copyrights, platforms can establish a formidable presence in their respective domains, effectively carving out their niches and dissuading rivals from encroaching on their territory. Moreover, IP rights grant platforms the authority to influence industry standards, regulatory frameworks and even the direction of the market, thereby solidifying their position as industry leaders.

For analysts dedicated to deconstructing platforms and their strategies, a comprehensive understanding of the platform's intellectual property portfolio proves indispensable. Such insight serves as a valuable observation point into the platform's inner workings, strategic priorities, and competitive positioning. By scrutinising the breadth and depth of a platform's IP holdings, researchers can discern the platform's focal points, for instance whether it is heavily invested in technological innovation or brand recognition. Additionally, the assessment of a platform's intellectual property portfolio unveils potential vulnerabilities, helping analysts identify areas where the platform might be exposed to competitive threats or imitation.

How relevant are a platform's trade secrets?

Things are significantly different with trade secrets. Unlike patents, which require public disclosure and grant exclusive rights for a limited period, trade secrets protect a platform's core functionalities and innovations precisely by keeping them confidential, for as long as that secrecy is maintained.

In the current technology-driven market, trade secrets serve as a safeguard for the proprietary knowledge that allows big platforms to maintain their competitive advantage. These legal instruments protect a diverse set of elements that are vital for social media platforms, including the algorithms responsible for making the user experience as engaging and irresistible as possible, the proprietary data processing techniques that underpin automated decision-making, and their distinctive business strategies.

Delving into the inner workings of a platform without violating its trade secrets is indeed a challenging mission. It requires a delicate balance between technical skills and legal prudence. The technical finesse required involves a deep understanding of complex algorithms and data processing methods, enabling analysts to infer a platform's functionality without direct access to its proprietary code. This often entails reverse engineering, complex data analysis and other advanced techniques that enable a comprehensive assessment of the platform's capabilities. Simultaneously, legal knowledge is paramount in ensuring that this exploration remains within the boundaries of the law. As trade secrets are legally protected, analysts must be equipped with a keen awareness of the limits imposed by trade secret laws and the need for ethical and lawful analysis.

Beyond the individual intricacies of copyright, IP and trade secrets, their convergence brings forth unique challenges and considerations. Platforms often operate in ecosystems where these legal constructs intertwine, creating an overlap of rights and restrictions that researchers must adequately navigate. Additionally, with platforms often operating across borders, the international dimensions of these laws add yet another layer of complexity.

1.3 Privacy and Data Protection Law Compliance in Analysing Social Media Platforms

At the very core of the economic strength of large online platforms lies, beyond any doubt, the incredible quantity of personal data that they are capable of collecting and processing daily. This unprecedented power has been able to transform consolidated market dynamics on a global scale.

One pressing challenge faced by researchers and analysts who delve into the task of understanding and examining these platforms is to investigate this data-rich landscape while ensuring strict adherence to privacy and data protection laws.

Is data-protection law a tool or a limit?

A foundational understanding begins with recognising the importance of individual privacy rights in the digital context. In essence, every user's interaction with a social media platform is, in many jurisdictions, considered a piece of personal data and a fragment of one's digital identity. As such, the law meticulously guards against any unauthorised or non-compliant access, use, or dissemination of this data.

The General Data Protection Regulation (GDPR) (9) of the European Union and the California Consumer Privacy Act (CCPA) (10) in the United States stand as prominent examples of legal regimes that set rigorous standards for data protection. While the specifics may vary, the core principles remain consistent: personal data must be processed lawfully, transparently and for a specific purpose; it must be minimal, accurate and stored only as long as necessary; individuals have the right to access, correct or delete their data.

For platform analysts, these laws carve out both a roadmap and a minefield. A roadmap in the sense of providing structured guidelines on lawful data processing. A minefield because inadvertent non-compliance can lead to severe legal consequences, both in terms of penalties and reputational harm.

How can we use people's personal data in a safe and ethical way?

A particular issue pertains to the topic of “informed consent”. When users sign up for social media platforms, they often provide consent for data collection regulated by terms of service or privacy policies. However, this consent may not extend to third-party analyses or research, and analysts must therefore be careful to ensure that their data access and processing methods either fall within the coverage of existing consents or rely on separate, explicit permissions.

In this regard, a very inspiring practice is “data donation” (11), the voluntary act of individuals or organisations contributing their data for research, social good or humanitarian purposes. This concept involves people or entities choosing to share their data in various ways to support different objectives, such as research on climate, healthcare, or social media platforms.

Additionally, the rise of 'anonymised' or 'de-identified' data offers both opportunities and challenges. While such data sets, stripped of personal identifiers, can potentially be used without infringing on individual privacy rights, questions arise about the true effectiveness of anonymisation techniques, especially given the sophistication of modern re-identification methods. Moreover, it is crucial for the anonymisation procedure to take place directly on the device of the data subject concerned, as it would otherwise very likely constitute processing of personal data in its own right.
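As a purely illustrative sketch, the Python snippet below shows one way identifiers could be stripped or pseudonymised on the data subject's own device before anything is shared with researchers. Field names and the salting scheme are assumptions, and salted hashing is pseudonymisation rather than full anonymisation, so further safeguards (aggregation, noise, removal of quasi-identifiers) may still be needed.

```python
# Minimal sketch of on-device pseudonymisation before data donation.
# Field names are hypothetical; salted hashing alone does not guarantee anonymity.
import hashlib

DIRECT_IDENTIFIERS = {"user_id", "username", "email", "phone"}

def pseudonymise(record: dict, salt: str) -> dict:
    """Replace direct identifiers with truncated salted hashes and drop location data."""
    cleaned = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            digest = hashlib.sha256((salt + str(value)).encode("utf-8")).hexdigest()
            cleaned[key] = digest[:16]   # stable pseudonym, not reversible without the salt
        elif key == "location":
            continue                     # drop fields that are hard to de-identify
        else:
            cleaned[key] = value
    return cleaned

record = {"user_id": "12345", "email": "jane@example.org",
          "location": "Milan", "text": "I donated this post for research."}
print(pseudonymise(record, salt="per-project-secret"))
```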

2. Considerations on lawful and crucial support from IT experts

Once we have delineated the main legal issues that shape and constrain the landscape of social media analysis, we can transition from the juridical outlines to the technical aspects connected with the previous considerations. In the digital era, law and technology influence each other at a growing pace, and the same is true for legal experts and information technology specialists.

IT experts, in this context, emerge as both decipherers and architects who can unravel these intricate patterns, offering insights that surpass simple interactions with a platform. Their contributions can, for instance, manifest themselves in the creation of customised tools specifically engineered to enhance the precision and efficiency of social media monitoring. These instruments often consist of customised analysis tools capable of performing small tasks tailored to the needs of each project.

In this section of the chapter, we will discuss some key aspects of how to document and describe the analysis operations performed on a digital platform through a technical report. Moreover, we will ponder how strong technical expertise in the field of IT can be leveraged to even make the platforms want to be analysed in the first place.

2.1 Best Practices in Evidence Gathering and Drafting of Technical Reports

To sustain any form of argumentation, whether for a legal case, a journalistic inquiry or academic research, it is always of the utmost importance to be able to count on solid and consistent evidence. The same applies when analysing social media platforms, where you may need to master the art of digital forensics to gather evidence in a legal and ethical manner.

Technical reports are integral components of the narratives that need to be built around a meaningful inquiry, and they must be meticulously crafted, ensuring clarity, credibility, and compliance with both the relevant legal framework and the best practices of the field.

There is a remarkable diversity of digital evidence in today's digital landscape, depending on the concrete scope of one’s research. From emails to server logs, social media interactions to encrypted messages, the digital footprint left by individuals and entities is vast and varied. Navigating this expansive domain necessitates rigorous protocols to ensure the authenticity, integrity, and admissibility of evidence.

The first and most important concept for a solid acquisition of digital evidence is the so-called “chain of custody” (12), which is the documented, uninterrupted record of how evidence has been handled, guaranteeing that it has not been altered. Establishing a clear chain of custody from the moment of data acquisition to its presentation is paramount, and every transfer, storage or access point must be meticulously logged, ensuring that the evidence remains untainted and its origins traceable.

Preservation of the original state is key to ensuring the authenticity of the evidence. Tools like forensic disk imaging can capture exact replicas of digital media, preserving metadata and ensuring that the original source remains unaltered. When selecting evidence-gathering tools, it is advisable to pay special attention to the level of accuracy they are capable of, as this will have a great impact on the credibility of our claims and their ability to stand against counterarguments.
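As a simple illustration of both ideas, the sketch below computes a cryptographic hash that fixes the state of a captured file at acquisition time and appends a timestamped entry to a custody log for every handling step. The file names and the log format are assumptions for illustration, not a forensic standard; real investigations typically rely on established forensic tooling and procedures.

```python
# Minimal sketch of hash-based integrity checks plus a chain-of-custody log.
# File names and the JSON-lines log format are hypothetical choices.
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def log_custody_event(log_path: str, evidence_path: str, action: str, handler: str) -> None:
    """Append a timestamped custody entry, including the current hash of the evidence."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "evidence": evidence_path,
        "sha256": sha256_of(evidence_path),
        "action": action,
        "handler": handler,
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

# Hypothetical usage: record the acquisition of a captured web archive.
log_custody_event("custody_log.jsonl", "capture_2024-01-01.warc", "acquired", "A. Researcher")
```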

The same level of care shall be given to the actual drafting of the technical report, which will describe the evidence and the acquisition procedure with a consistent level of clarity, comprehensibility, and precision. A technical report must be easily navigable, with a clear structure and layout that describes (at least) the methodology used, the findings obtained and the conclusions that can be derived from them. The language used should ensure that even readers unfamiliar with technical jargon are able to follow the narrative.

Given that technical reports often find their way into legal settings where technical expertise may be limited, including a plain language summary in the introduction can bridge the gap and offer a concise and comprehensible overview of the report's key findings.

Detailing every step of the methodology used for the evidence-gathering process, the characteristics of the tools used and the rationale behind each decision helps make the report reliable and allows for its reproducibility. Visual aids like graphs, charts and tables can simplify complex data sets, providing clear visual representations that enhance understanding. However, each visual aid must be accompanied by comprehensive captions and sourced appropriately to avoid the risk of misinterpretation.
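As a purely illustrative example, the sketch below generates a simple labelled chart with matplotlib, including a caption-style title and a source note, so that the figure cannot be misread in isolation. The counts and the source text are placeholders.

```python
# Minimal sketch of a labelled figure for a technical report.
# The monthly counts and the source note are placeholder values.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]
removed_posts = [120, 95, 143, 110]   # hypothetical counts from the collected data

fig, ax = plt.subplots(figsize=(6, 3))
ax.bar(months, removed_posts)
ax.set_title("Figure 1: Posts removed per month (hypothetical sample)")
ax.set_xlabel("Month")
ax.set_ylabel("Removed posts")
fig.text(0.01, -0.05, "Source: dataset collected via the platform's research API, 2024.",
         fontsize=8)
fig.savefig("figure1_removed_posts.png", bbox_inches="tight", dpi=150)
```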

Before finalising a technical report, it is advisable to subject it to peer review by external experts who can validate the methodology, highlight potential omissions, and ensure that the report stands up to scrutiny.

2.2 Ethical Hacking and Security Research Exemptions: Balancing Vulnerability Disclosure and Platform Integrity

There is one more way to go very deep into the analysis of large online platforms, or any digital infrastructure, and benefit from a sort of exemption that facilitates the research and reduces the risk of repercussions. It is called “ethical hacking” (13) and it consists of investigating a platform to identify vulnerabilities, bugs and other defects without malicious intent, in order to improve the security and performance of a certain place in cyberspace.

The guiding principle of an ethical hacker, also called a “white-hat hacker”, is to uncover weaknesses of a platform so they can be rectified. This is the core difference from so-called “black-hat hackers”, who seek vulnerabilities in order to exploit them or for other malevolent reasons.

Numerous jurisdictions have begun recognising the invaluable contributions of ethical hackers. Laws have evolved, creating exemptions for security research, ensuring that well-intentioned hackers do not find themselves inadvertently on the wrong side of the law. However, these exemptions come with conditions, ensuring the research does not compromise user data or platform stability, or infringe on intellectual property.

In a collaborative stride, many platforms now host Vulnerability Disclosure Programs, inviting ethical hackers to identify and report vulnerabilities. These programs provide a structured framework, often with guidelines on safe testing, responsible disclosure, and rewards for researchers. These guidelines are very important because ethical hackers, for example when probing social media platforms, risk accessing user data or exceeding the scope of the research they were supposed to conduct. Striking a balance between research comprehensiveness and user privacy is critical, and ethical hackers must employ techniques to test vulnerabilities without compromising or accessing real user data.

One of the pivotal dilemmas in ethical hacking is the timing of vulnerability disclosure. While immediate disclosure seems the most transparent approach, it might leave platforms exposed until a fix is implemented. Yet, delayed disclosure might provide ample remediation time but could risk undisclosed vulnerabilities being discovered and exploited by malicious actors.

It is worth noting that different national jurisdictions often have differing stances on ethical hacking and security research and, for this reason, understanding and navigating these legal intricacies is imperative to ensure this form of research remains compliant even when conducted across borders. As technology advances, the realm of ethical hacking will inevitably evolve, and understanding the new trends can prepare platforms and researchers for forthcoming cybersecurity landscapes.

3. Navigating the Legal Terrain of Social Media Monitoring

As we conclude this chapter, it is important to emphasise a few steps that you may follow to ensure that your efforts in monitoring online discourse remain legally sound.

Before embarking on any social media monitoring endeavour, it is essential to acquaint yourself with the relevant laws and regulations governing online activities. Each jurisdiction may have its own set of rules governing data privacy, intellectual property, defamation and more. Additionally, meticulously review the Terms of Service (ToS) of the social media platforms you intend to monitor. These agreements often contain provisions addressing data usage, scraping, and content sharing, which can significantly impact your monitoring activities. Seek the assistance of a lawyer or a legal expert whom you trust as you start detailing the framework of your research.

Respect for user privacy is a cornerstone of responsible social media monitoring. When collecting and analysing data that involves individuals, seek informed consent whenever possible. This not only aligns with ethical research practices but also mitigates potential legal concerns. Furthermore, when sharing or presenting your findings, take diligent steps to anonymize data and protect the identities of users, unless explicit consent has been granted for their inclusion.

Important note

It is always advisable to seek guidance from a lawyer or a legal expert who is knowledgeable about IT law and the specific characteristics of the legal system applicable in the jurisdiction in which you will operate. This can be paired with the invaluable contribution of technology experts.

As researchers exploring the expansive realm of social media monitoring, you have the fundamental role of uncovering insights, influencing discussions and contributing to the collective understanding of online discourse. This is a power that comes with great responsibility, and your commitment to adhering to legal and ethical principles is paramount. By being aware of the context described in this chapter, we can navigate the legal terrain of social media monitoring with confidence, ensuring that our research remains both legally sound and ethically responsible.

References

(1) A. Mekacher, M. Falkenberg and A. Baronchelli, The Systemic Impact of Deplatforming on Social Media, 2023

(2) https://www.nytimes.com/interactive/2023/01/13/business/what-is-shadow-banning.html

(3) https://www.oxfordreference.com/display/10.1093/acref/9780195369380.001.0001/acref-9780195369380-e-1127

(4) https://digitalservicesact.cc/dsa/art31.html

(5) V. Singrodia, A. Mitra, S. Paul, A Review on Web Scrapping and its Applications, 2019

(6) N. Kaur, M. Singh, A Review on Web Scrapping and its Applications, 2016

(7) S. Blackmore, The Meme Machine, Oxford University Press, 2000

(8) https://www.reuters.com/technology/more-writers-sue-openai-copyright-infringement-over-ai-training-2023-09-11/

(9) https://eur-lex.europa.eu/eli/reg/2016/679/oj

(10) https://oag.ca.gov/privacy/ccpa

(11) J. Ohme, T. Araujo, Digital data donations: A quest for best practices, 2022

(12) M.N.O. Sadiku, A.E. Shadare, S.M. Musa, Digital Chain of Custody, 2017

(13) Hafele, Three Different Shades of Ethical Hacking: Black, White and Gray, 2021