AI-generated ads are quietly blending into our daily social media feeds, making it harder than ever to tell what is real and what was created by machines, and raising serious concerns about transparency and trust online.
The rise of AI-generated ads is changing how people experience content on social media, and not everyone is comfortable with it. Many users today find themselves questioning whether what they see is real or created by artificial intelligence, especially on platforms like TikTok, where visual content moves quickly and blends seamlessly into everyday browsing.
One growing concern is the lack of clear disclosure. While AI-generated ads are becoming more advanced and realistic, information about how they were made is not always shared openly. This creates confusion for viewers trying to work out whether a video or image is authentic or machine-generated.
A recent example involving Samsung and its promotional campaigns highlights this issue. The company has been seen using AI-generated ads to promote features like the Galaxy S26 Ultra’s privacy display. Interestingly, similar promotional videos published on platforms like YouTube include small disclosures mentioning the use of AI tools. When these same ads appear on TikTok, however, that information is often missing.
This inconsistency raises an important question: if companies know they are using AI-generated ads, why not clearly inform users everywhere?
Both Samsung and TikTok are members of the Content Authenticity Initiative, which aims to improve transparency in digital content. The initiative promotes standards such as C2PA, designed to help users identify the origin and authenticity of media. In theory, this should make AI-generated ads easy to recognize. In practice, the system does not seem to be working as expected.
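In principle, C2PA provenance data travels inside the media file itself, so its mere presence can be checked programmatically. A minimal sketch in Python, using a crude substring heuristic rather than real verification (C2PA manifests live in JUMBF boxes labeled "c2pa", carried in containers such as JPEG APP11 segments; properly validating them requires a full C2PA parser and signature checks, which this does not attempt):

```python
def has_c2pa_marker(data: bytes) -> bool:
    """Return True if the raw bytes contain the C2PA JUMBF label.

    Crude heuristic only: finding the label suggests Content
    Credentials may be embedded, but says nothing about whether
    the manifest is valid, complete, or untampered.
    """
    return b"c2pa" in data


# Synthetic byte strings for illustration (not real media files):
with_manifest = b"\xff\xd8...jumb...c2pa...manifest..."
without_manifest = b"\xff\xd8...plain image bytes..."

print(has_c2pa_marker(with_manifest))     # True
print(has_c2pa_marker(without_manifest))  # False
```

The point of the sketch is that a platform ingesting an ad could cheaply flag files that carry provenance data, then surface a label to viewers; stripping or ignoring that data at upload time is where the chain of transparency breaks.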
From a user’s point of view, this lack of transparency can feel misleading. People who spend time analyzing content often look for small signs that something is AI-generated, such as unnatural movements or visual inconsistencies. But as technology improves, these signs are becoming harder to detect. Without proper labels, even experienced viewers can struggle to tell the difference.
In our opinion, the issue is not the use of AI in advertising. AI-generated ads can be creative, efficient, and even entertaining. The real problem lies in honesty and communication: if brands and platforms want users to trust them, they must be clear about how content is created.
Another concern is responsibility. When AI-generated ads appear without labels, it becomes unclear who is accountable: the brand that created the content, or the platform that distributes it? Ideally, both should take responsibility. Companies should disclose their use of AI, and platforms like TikTok should ensure that this information is visible to users.
There is also a broader impact on digital trust. Social media has already faced challenges with misinformation and manipulated content, and the rise of AI-generated ads adds another layer of complexity. If users begin to feel that everything they see could be artificial, their confidence in online content as a whole may erode.
To improve the situation, stronger enforcement of AI disclosure policies is needed. Platforms should make it mandatory for advertisers to clearly label AI-generated ads, and those labels should be easy to notice, not buried in descriptions or metadata. At the same time, companies should adopt transparent practices as part of their brand identity.
Looking ahead, AI will undoubtedly continue to play a major role in digital marketing. But the success of AI-generated ads will depend on how responsibly they are used. Transparency should be treated not as an optional feature but as a basic requirement.
In the end, users deserve to know what they are watching. Clear labeling of AI-generated ads is not just a technical issue; it is a matter of trust. If companies and platforms truly support transparency, their actions should reflect that commitment in every piece of content they share.