The EU AI Act Was Written for You. Most Coverage Forgot That.

Every article about the EU AI Act talks about what companies need to do before August 2. Conformity assessments. Technical documentation. EU representatives. Annex III checklists. That framing makes sense if you're a compliance officer. It misses the point of the law entirely if you're the person the law was written to protect.

The EU AI Act is a consumer protection framework. The compliance burden on businesses exists because the law is trying to protect you from specific things. Understanding what those things are, and what you can actually do when something goes wrong, is more useful than most of what's been written about this legislation.

***

What the Act bans outright

Some AI applications are prohibited entirely. They cannot be deployed in the EU, period. No compliance path, no exception for business cases. The banned list is worth knowing because these are things that could be done to you without your knowledge, and the Act says they cannot be.

Social scoring. AI systems that evaluate people based on their behavior, social relationships, or personal characteristics to give them a score that affects their access to services, opportunities, or treatment. The model China uses for its social credit system. Banned.

Subliminal manipulation. AI that uses techniques operating below your conscious awareness to influence your behavior or decisions in ways that harm your interests. This is distinct from persuasion, which is legal. The line is whether the technique bypasses your ability to notice and resist it.

Exploiting vulnerabilities. AI systems that target people based on age, disability, or economic hardship to influence their behavior in ways that damage their interests. A system that identifies people under financial stress and manipulates them into decisions they would not otherwise make is banned.

Real-time biometric surveillance in public.
AI that identifies you from your face, gait, or other biometrics in public spaces in real time is prohibited. There are narrow law enforcement exceptions for things like finding a missing child or preventing a specific terrorist threat. The general case, a camera system that continuously identifies and tracks everyone passing through a public space, is not allowed.

Emotion recognition in the workplace and in schools. AI that infers your emotional state from your face, voice, or behavior is banned in employment and educational contexts. An employer cannot deploy a system that reads your mood during a video call. A school cannot use software that monitors students' emotional engagement.

These are not disclosure requirements. They are prohibitions. If a company uses any of these systems on you in the EU, they are violating the law.

***

What you get when AI makes a decision about you

Most of the EU AI Act's protections apply to a specific category: high-risk AI systems. These are systems defined in Annex III of the Act, and they cover the decisions that matter most to your life. The list includes:

- AI used in hiring and CV screening.
- AI used in credit scoring and insurance underwriting.
- AI used in decisions about access to public benefits.
- AI used in educational admissions or assessments.
- AI used to evaluate employees, allocate tasks, or monitor performance.

If an AI system in one of these categories makes or significantly influences a decision about you, you have three specific rights.

The right to know. The company must tell you that an AI system was involved in the decision. You should not have to guess whether a human or a machine rejected your loan application or your job application.

The right to explanation. You can ask for a meaningful explanation of how the decision was made. Not a generic disclaimer. An explanation of what factors the system weighted, what data it used, and how it reached its conclusion about you.

The right to human review.
You can request that a human review the decision. The company is required to have a human oversight mechanism in place. They cannot hide behind "the algorithm decided" as a final answer.

These rights apply to decisions that significantly affect you. A loan rejection. A job application rejection. An insurance denial. An academic admission decision. The moments where an incorrect or biased AI output has real consequences for your life are exactly the moments the Act is designed to give you recourse.

***

What you must be told, regardless

Separate from the high-risk category, the Act creates transparency obligations that apply broadly.

If you are talking to a chatbot or AI assistant, you must be told it is an AI. Companies cannot deploy conversational AI designed to make you believe you are talking to a human.

If you are looking at content that was generated or significantly manipulated by AI, including deepfake images, AI-generated video, or synthetic audio, it must be labeled as such. The obligation applies to both the creators and the platforms distributing it.

If a company is using an AI system to make recommendations that affect you, and that system falls under the limited-risk category, they must explain what the system does and how it affects the output you see.

The transparency requirements do not give you the right to opt out. They give you the right to know. What you do with that information is up to you.

***

The gap between what the law says and what you can do today

The rights described above are real. Exercising them is, right now, complicated.

The enforcement infrastructure is incomplete. Only 8 of 27 EU member states have their national supervisory authorities fully operational for the August 2 deadline. These are the bodies you would contact if a company violated your rights under the Act. If your country's authority is not yet functional, your complaint goes into a queue.

The process for claiming your rights is not yet standardized.
The right to explanation exists, but the procedure for requesting one has not been formalized across all sectors. You may need to submit a written request, cite the specific article of the regulation, and wait for a response that may not come in a useful timeframe.

The law also does not require companies to proactively inform you of everything. You may not know that the hiring platform, the insurance portal, or the benefits system used a high-risk AI to evaluate you unless they tell you upfront or you ask. The right to explanation is triggered by your request, not by automatic disclosure.

This is not an argument against the rights. It is an honest description of where we are in August 2026. The legal framework is solid. The practical infrastructure for using it is still being built.

***

What you can actually do now

Ask directly. When a decision goes against you, ask whether an AI system was involved. Companies subject to the Act are legally required to answer this truthfully. If they say yes, ask for an explanation. If they decline, that is worth documenting and potentially worth escalating.

Request human review. For any significant decision, you are entitled to ask that a human look at your case. Do this in writing. Keep the record. If the company refuses or provides a human review that is clearly perfunctory, you have grounds for a complaint.

File with your national authority. EU member states are building their supervisory authorities now. The European AI Office publishes information about which national authority handles which sector. When one is operational in your country, a documented complaint, even if slow to resolve, creates the record that enforcement actions are eventually built on.

Know which decisions matter. The protections are strongest for the decisions with the most consequence: credit, employment, insurance, education, benefits.
If you are in a process that involves any of these and you are in the EU or dealing with a company operating in the EU, your rights under the Act are in play.

***

The EU AI Act was not written to protect OpenAI from a fine. It was written because AI systems are making consequential decisions about people's lives, often invisibly, often without recourse. The compliance burden on companies is the mechanism. You are the reason.

That context is worth keeping in mind, especially as August 2 approaches and most of the coverage remains focused on what businesses need to file.

***

Marco Kotrotsos specializes in practical AI implementation for organizations ready to close the gap between AI hype and AI value. With 30 years of IT experience now focused purely on AI deployment, he works hands-on with companies to turn AI potential into measurable business outcomes.

His free Substack about practical AI, Autocomplete, can be found here: https://acdigest.substack.com. He also writes a Medium publication about life, personal relationships, parenthood, and health from his own perspective: https://medium.com/@strongerafter