🪪 Age Checks Go AI
The UK's Online Safety Act, Photoshop's Fresh Features, and Building Better Tools
“True innovation will come not from new technologies, but from new ways of collaboration.”
—Thomas Malone, Professor and Business Theorist
The AI Breakdown
The UK’s New Internet Rulebook

On July 25, the United Kingdom began enforcing the Online Safety Act, a sweeping law that requires digital platforms to verify users’ ages before allowing access to certain online content.
The goal is a noble one: to protect children from exposure to high-risk, harmful, and age-inappropriate material. But the method of enforcement is raising some eyebrows.
Instead of relying on traditional age checks like self-declared birthdays, the UK now requires platforms to use more rigorous verification, much of it powered by artificial intelligence. AI tools such as facial age estimation and behavioral pattern recognition are now responsible for determining whether users can access specific content.
How It Works
The law applies to any service that allows user interaction or content sharing. This includes social networks, search engines, messaging apps, gaming platforms, and more.
To comply, platforms must verify user age using one or more of the following:
Facial analysis performed by AI models
Metadata checks involving email history, mobile accounts, or payment records
Document scans of government-issued IDs
Whichever method is used, the check delivers a binary result: the user is either old enough or not.
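To make that binary outcome concrete, here is a minimal, purely hypothetical sketch of the kind of decision logic a platform might run. The signal names, the age threshold, and the order of precedence are illustrative assumptions, not details taken from the Act or from any vendor's API.

```python
# Hypothetical age-gate decision logic: several verification signals in, one yes/no out.
from dataclasses import dataclass

MINIMUM_AGE = 18  # illustrative threshold, not specified here by the Act

@dataclass
class AgeSignals:
    facial_estimate: float | None = None   # age predicted by a facial-analysis model
    id_document_age: int | None = None     # age parsed from a scanned government ID
    metadata_suggests_adult: bool = False  # e.g. long-standing email or payment history

def is_old_enough(signals: AgeSignals) -> bool:
    """Collapse whichever signals are available into a single pass/fail result."""
    if signals.id_document_age is not None:           # strongest evidence first
        return signals.id_document_age >= MINIMUM_AGE
    if signals.facial_estimate is not None:
        return signals.facial_estimate >= MINIMUM_AGE
    return signals.metadata_suggests_adult            # weakest fallback

# A facial estimate of 21 with no ID on file passes the gate.
print(is_old_enough(AgeSignals(facial_estimate=21.0)))  # True
```

Real systems would also weigh confidence scores, retries, and appeals, but the output regulators care about is still that single gate.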
Noncompliance carries significant consequences. Platforms can be fined up to 10% of global revenue, and in cases of repeated violations, senior executives could be held personally liable.
Restrictions Abroad
Although the law applies only in the UK, its influence is spreading.
YouTube recently announced it will begin using machine learning in the U.S. to estimate user age based on viewing behavior.
Google is also expanding its AI models to analyze search queries, viewing habits, and account metadata to infer whether a user is under 18.
This marks the first time platforms have deployed large-scale age verification technology in a market without a legal mandate, and it reflects how some platforms are choosing to adopt the most stringent regional standards as a baseline for global policy.
A Look Ahead
AI is now being given the responsibility to enforce policy, not just support engagement. That includes deciding who gets to see what, when, and why.
YouTube’s move suggests other platforms may follow.
For businesses that operate online, this is a moment to take notice and prepare for a future where AI may be expected to help satisfy legal and ethical standards, not just optimize customer journeys.
Top Tools
Photoshop Upgrades: Harmonize, Upscale & Remove
Adobe has launched a new batch of AI features for Photoshop that make advanced photo editing even faster and more approachable.
The biggest addition is a tool called Harmonize. It automatically blends new objects or people into existing photos, adjusting lighting, shadows, and colors so everything looks like it belongs. What used to take serious editing skills can now be done with a couple of clicks.
The update also includes:
Smarter Object Removal: The improved tool now removes unwanted elements with cleaner fills and better precision. Crucially, it has also learned to stop inserting random content in place of what you delete.
AI Upscaling: Boosts low-res images up to eight megapixels without losing clarity. Great for restoring old photos or resizing for different platforms.
Adobe is also adding a layer of transparency to the process. Images edited with these features include optional “Content Credentials”: metadata that shows what was changed and how. It’s designed to support responsible editing and build trust around how images are made.
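For readers who want to inspect those credentials themselves, here is a small sketch that shells out to the C2PA project's open-source c2patool command-line utility (Content Credentials are built on the C2PA standard). The exact invocation, output shape, and error behavior can vary by tool version, so treat this as an assumption-laden starting point rather than Adobe's own API.

```python
# Minimal sketch: read Content Credentials (C2PA metadata) from an exported image.
# Assumes the open-source `c2patool` CLI is installed and on PATH; its default
# behavior of printing the manifest report as JSON may differ between versions.
import json
import subprocess

def read_content_credentials(image_path: str) -> dict | None:
    """Return the image's C2PA manifest as a dict, or None if it carries no credentials."""
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no manifest found, or the file could not be read
    return json.loads(result.stdout)

if __name__ == "__main__":
    manifest = read_content_credentials("edited-photo.jpg")
    print("Has Content Credentials:", manifest is not None)
```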
These features are available in beta for Photoshop on desktop and web, with more on the way. Harmonize is also available early on Photoshop’s iOS app.
Prompt of the Week
Ownership doesn’t happen because you tell people to care more. It happens when the environment makes initiative the default and accountability the norm. If you want your team to take more responsibility, start by shifting how expectations are set, how progress is discussed, and how outcomes are connected to real impact.
That means talking about goals in terms of results, not tasks. It means being transparent with your own wins and losses. It means asking questions that encourage forward thinking instead of passive reporting. And it means stepping back just enough to let others step up.
If you want to explore new ways to communicate that shift, here’s a simple prompt you can run in ChatGPT:
Act as a leadership coach. I want to communicate expectations more clearly and build a stronger sense of ownership across my team. Suggest talking points, weekly questions, or 1:1 coaching prompts that encourage initiative and accountability without micromanaging.
The more clearly you define the lane, the more confidently people drive in it.
Hear from the Experts
In an industry chasing the next big thing, Tina Cuatto says the real game changer is simple: better vendor communication.
In this Auto Collabs conversation, we welcome back Tina Cuatto from DealerOn, who tackles hot topics like the pressure AI is putting on product teams, the hidden dangers of moving too fast without guardrails, and building features that support real dealer decisions.
Tune in as Tina breaks down what most vendors get wrong about AI and how to build what actually works.
Bits and Bytes
The U.S. SEC announced the creation of an artificial intelligence task force on Friday. 🦾
New ads in the August issue of Vogue feature AI-generated models, sparking a wave of outrage on social media. đź“–
OpenAI’s agent was observed checking one of those “I am not a robot” boxes used by websites to verify your humanness. 🤥
McDonald's says it intends to "double down" on its artificial intelligence investments by 2027. 🍔