Garante ruling tightens rules on cookies and AI profiling for businesses

From a regulatory point of view, the Garante has established that combining tracking cookies with AI-driven profiling raises specific data protection risks and compliance obligations.

Garante decision clarifies limits on profiling with cookies and AI
From a regulatory standpoint, the Italian Garante has clarified how tracking cookies combined with AI-driven profiling may lawfully be used for personalized advertising in Italy. The Authority examined implementations that paired behavioral cookies with automated profiling models. It found that some practices amounted to unlawful processing due to missing legal grounds, insufficient transparency and inadequate safeguards for data protection.

The Authority has established that consent and legitimate interest cannot be assumed in all cases where cookies feed AI models for ad targeting. Controllers must demonstrate a specific legal basis for each processing purpose. Transparency obligations require clear, accessible explanations of how tracking data are combined with automated profiling, and how those profiles influence advertising decisions.

From the perspective of corporate compliance, the decision tightens obligations for marketers, adtech vendors and platforms operating in Italy. Compliance risk is real: organisations using cookie data to train or run AI-driven profiling must reassess consent mechanisms, privacy notices and technical safeguards. Data protection impact assessments and robust data minimization are likely to be necessary where profiling produces significant effects on individuals.

The ruling underscores that technical integration between tracking systems and AI models increases supervisory scrutiny. Companies should expect regulators to evaluate both the legal basis and the operational measures that limit discrimination and protect user rights. The Authority signalled heightened attention to automated decision-making and profiling that materially affect individuals’ access to services or commercial opportunities.

1. Normative background and the ruling in question

From a regulatory standpoint, the decision rests on the GDPR compliance framework and EDPB guidance on profiling and consent. The Authority has established that deploying persistent cookies and server-side behavioural models without explicit informed consent or another valid legal basis breaches data protection rules.

The ruling targets uses of tracking that enable automated decisions or materially influence offers and access to services. It applies where profiling and automated processing are used to segment users, personalise prices or prioritise commercial opportunities.

From the Authority’s perspective, the core violation is twofold: lack of adequate information and absence of a lawful basis for automated decision-making. The Authority has established that consent obtained through opaque interfaces or inferred from behaviour does not meet the GDPR standard for informed, freely given consent.

Interpretation and implications are practical. Controllers must map processing activities that rely on persistent identifiers and server-side models. They must assess whether processing produces legal effects or similarly significant impacts on individuals. If so, controllers need a clear legal basis and, where required, explicit consent.

Compliance risk is real: companies that continue profiling without robust consent mechanisms face administrative investigations and potential sanctions. The ruling also raises obligations on transparency, record-keeping and data protection impact assessments for high-risk automated processing.

From a procedural standpoint, the Authority signalled increased supervisory scrutiny of first-party and server-side tracking ecosystems. Companies should prioritise technical and organisational measures to reduce reliance on behavioural identifiers and to preserve users’ rights to opt out of profiling.

2. Interpretation and practical implications

From a regulatory standpoint, the ruling reinforces that consent frameworks which do not offer a genuine choice are inadequate. The Authority has established that simplistic website notices and cookie walls are not sufficient to legitimise profiling operations that aggregate multiple sources of personal data.

The decision highlights higher risk where organisations combine behavioural identifiers with machine learning models for ranking, segmentation or propensity scoring. Compliance risk is real: such uses require a clear lawful basis, strict purpose limitation and demonstrable data minimisation.

Practically, firms must update records of processing activities, run targeted data protection impact assessments and formalise the legal basis for each profiling pipeline.

For implementation, consider concrete steps: limit data retention for profiling inputs, pseudonymise datasets where possible, and segregate training data from operational scoring to reduce re-identification risk. The Authority has established that documentation of these controls is essential evidence of compliance.

Failure to act increases exposure to administrative sanctions and corrective orders under data protection rules. Companies should therefore treat profiling governance as an operational priority and embed RegTech controls to monitor ongoing risk.

The following section explains specific measures legal teams and data engineers must adopt to align models and consent mechanisms with supervisory expectations.

3. What companies must do

From a regulatory standpoint, controllers and their legal teams must translate supervisory expectations into concrete operational steps. The Authority has established that superficial consent mechanisms and opaque profiling practices no longer suffice. The following measures set out practical actions companies should adopt to reduce compliance risk and align systems with regulatory priorities.

  • Document the legal basis for each processing activity. Record whether processing relies on consent, legitimate interest or contract performance. Map bases to specific profiling tasks and avoid assuming a single default basis for all profiling.
  • Provide granular cookie controls that separate essential functions from tracking. Implement an interface that lets users accept or refuse specific categories without losing access to core services. Log user choices and ensure they are enforceable across sessions and devices.
  • Clarify automated decision-making in privacy notices. Explain profiling logic, purposes and likely impacts in plain language. Include examples of decisions users may face and steps for contesting outcomes.
  • Introduce Data Protection Impact Assessments (DPIA) for combined cookie and AI-profiling systems. Identify risks, document mitigation measures and retain DPIA records to demonstrate due diligence to supervisory authorities.
  • Minimise data use and retention. Restrict personal data categories and retention periods to what is strictly necessary for stated purposes. Apply aggregation or pseudonymisation where possible to reduce identifiability.
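The first measure above can be sketched in code: a minimal register that maps each profiling purpose to a documented legal basis and refuses processing when no basis is recorded, or when a consent-based purpose lacks a purpose-specific opt-in. The purpose names and the register structure are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class LegalBasis(Enum):
    CONSENT = "consent"
    LEGITIMATE_INTEREST = "legitimate_interest"
    CONTRACT = "contract"

@dataclass(frozen=True)
class ProcessingPurpose:
    name: str
    basis: LegalBasis
    dpia_done: bool  # record that an impact assessment exists

# Hypothetical register: one documented basis per purpose, never a
# single default basis assumed for all profiling tasks.
REGISTER = {
    "ad_personalisation": ProcessingPurpose(
        "ad_personalisation", LegalBasis.CONSENT, True),
    "fraud_scoring": ProcessingPurpose(
        "fraud_scoring", LegalBasis.LEGITIMATE_INTEREST, True),
}

def may_process(purpose: str, user_consents: set[str]) -> bool:
    """Allow processing only with a registered basis; consent-based
    purposes also require an explicit, purpose-specific opt-in."""
    entry = REGISTER.get(purpose)
    if entry is None:  # no documented legal basis -> refuse
        return False
    if entry.basis is LegalBasis.CONSENT:
        return purpose in user_consents
    return True
```

Routing every profiling call through a gate like this makes the per-purpose legal basis both enforceable at runtime and auditable on paper.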

From a practical perspective, integrate legal, engineering and product teams early in development cycles. The Authority has established that legal review after deployment is insufficient. Compliance risk is real: inadequate technical controls or poor documentation increase the likelihood of corrective measures and fines.

What should companies do next? Embed consent checks in engineering pipelines, keep audit-ready records, and update vendor contracts to reflect profiling restrictions and audit rights. Maintain a rolling DPIA review schedule and train front-line staff on how to respond to data subject requests.

Expected enforcement focus includes transparency failures, mixed legal bases for profiling and weak cookie controls. Companies that document decisions and demonstrate risk mitigation will be better positioned during supervisory reviews.

4. Risks and possible sanctions

The Authority has established that non-compliance may attract administrative fines under the GDPR, corrective orders to stop processing, and reputational harm. Regulators can also mandate independent audits and the appointment of a data protection assessor. Compliance risk is real: fines may reach up to €20 million or 4% of global annual turnover, whichever is higher, depending on the infringement.

From a regulatory standpoint, supervisory bodies prioritise evidence of governance and remediation. Companies that document decision-making and show concrete risk mitigation will fare better in reviews. The Authority has established that weak documentation or ignored corrective orders increases the likelihood of escalated sanctions.

Practically, organisations should be prepared to provide records of processing activities, impact assessments when required, and proof of implemented safeguards. The risk landscape includes financial penalties, operational restrictions, and increased supervisory scrutiny. Enforcement trends point to more targeted interventions by regulators, with a focus on systemic failures and repeat breaches.

5. Best practices for compliance

From a regulatory standpoint, enforcement trends show targeted interventions focusing on systemic failures and repeat breaches. The Authority has established that regulators expect demonstrable, ongoing governance, not one-off fixes.

Companies should translate obligations into operational controls. The guidance below prioritises practical steps that reduce regulatory exposure and support defensible decision-making.

  1. Implement consent management platforms that record consent granularity and timestamps, enable straightforward withdrawal, and export an immutable event history for audits.
  2. Prioritise privacy-preserving architectures such as pseudonymisation, on-device processing and differential privacy where feasible, to limit identifiable data processed centrally.
  3. Maintain living impact assessments by updating data protection impact assessments when models, data sources or processing purposes change, and by logging decision rationales.
  4. Cross-train teams and centralise RoPA so marketing, product and engineering maintain shared records of processing activities and documented legal bases for high-risk processing.
  5. Standardise responses with templates for data subject rights, breach notifications and regulator inquiries, and rehearse incident playbooks with legal and technical stakeholders.
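Point 1 above can be illustrated with a minimal append-only consent log: it records granularity and timestamps, treats withdrawal as a later event that overrides earlier consent, and exports the full history for audits. The event fields and categories are assumptions, not a prescribed schema.

```python
import json
from datetime import datetime, timezone

class ConsentLog:
    """Append-only consent record: withdrawals are new events, never
    edits, so the audit trail stays immutable."""

    def __init__(self):
        self._events = []

    def record(self, user: str, category: str, granted: bool, at: datetime):
        self._events.append({
            "user": user, "category": category,
            "granted": granted, "at": at.isoformat(),
        })

    def current_state(self, user: str, category: str) -> bool:
        """Latest event wins: a later withdrawal overrides earlier consent;
        no event means no consent."""
        state = False
        for e in self._events:
            if e["user"] == user and e["category"] == category:
                state = e["granted"]
        return state

    def export_audit(self) -> str:
        """Full history as JSON for supervisory review."""
        return json.dumps(self._events, indent=2)
```

Defaulting to "no consent" when no event exists mirrors the principle, noted earlier, that consent can never be assumed or inferred from behaviour.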

Interpretation and practical implications

From a regulatory standpoint, documentation is evidence of governance. The Authority considers thorough records and repeatable processes as mitigating factors when assessing sanctions.

What companies must do

Compliance risk is real: embed RegTech to automate logs, monitor drift in models, and enforce retention schedules. Appoint accountable owners for data flows and control points.

Risks and enforcement

Regulators will scrutinise gaps between policies and practice. Weak audit trails, inconsistent consent handling or untested incident plans increase the likelihood of corrective orders and fines.

Practical best practices

Adopt a layered approach: technical controls to limit exposure, organisational measures to govern change, and playbooks to ensure fast, consistent responses. Regularly validate controls through internal audits and external assessments.

The Authority has established that demonstrable, repeatable controls materially affect enforcement outcomes; maintain evidence and update controls as operations evolve.

GDPR compliance requires integration of design, process and evidence

GDPR compliance requires more than box-ticking. Companies must align technical design and business processes with data protection principles. The Authority has established that demonstrable controls influence enforcement outcomes. From a regulatory standpoint, the Garante’s recent approach signals closer scrutiny of mixed cookie and AI ecosystems.

Practical implications for operations and risk

Data flows that feed profiling models demand clear legal bases and documented safeguards. Compliance risk is real: weak or cosmetic measures increase exposure to regulatory action and reputational harm. The Authority has established that repeatable, auditable controls reduce that exposure and shape enforcement priorities.

What companies should do next

From a regulatory standpoint, map data flows from cookie scripts to downstream AI models. Perform a DPIA where profiling or automated decision-making is involved. Standardise consent records, minimise data collected, and apply privacy-enhancing techniques at the design stage. Maintain evidence of decisions, testing and review cycles.

Offer of practical support

If you need a pragmatic compliance checklist or assistance performing a DPIA on a cookie-fed profiling system, I can provide an actionable template tailored to your business, including evidence checklists, risk-scoring criteria and suggested mitigation measures aligned with EDPB guidance and GDPR compliance best practice.

From a regulatory standpoint, maintaining updated controls and traceable evidence will materially affect enforcement outcomes as operations evolve.

Written by Dr. Luca Ferretti
