
NEWS

“Digital and Social Media & Artificial Intelligence Technology News offers a clear lens on how AI is transforming social platforms, content creation, and the digital ecosystem for professionals and enthusiasts alike.”

Generative AI Ethics: The Gen AI Kool-Aid Tastes Like Eugenics

March 21, 2026 by Prof. Mian Waqar Ahmad Hashmi

Generative AI ethics is becoming a major concern as we uncover how AI systems can reflect bias, spread harmful content, and shape the future of technology.

The conversation around generative AI ethics is growing fast as more people begin to question what these tools are really doing behind the scenes.

When OpenAI introduced its video-generation tool Sora in 2024, it caught the attention of creators across the world. Among them was filmmaker Valerie Veatch, who felt both curious and excited about the new possibilities. Like many others, she joined online communities where people were experimenting with generative AI and sharing creative results.

At first, everything looked inspiring. But as she spent more time exploring, a different reality started to appear.

Veatch noticed that the system often produced biased content, including racist and sexist visuals, even when users did not ask for such outputs. This raised serious concerns about generative AI ethics and how these systems are trained. What surprised her even more was the lack of reaction from others. Many users continued to celebrate the technology without questioning its harmful patterns.

This experience slowly changed her perspective.

Instead of continuing her experiments, Veatch stepped back and decided to understand the bigger picture. This journey led her to create the documentary Ghost in the Machine. The film does not focus on futuristic promises. Instead, it looks at the history of artificial intelligence to explain why modern generative AI behaves the way it does. A key message in her work is that people often use the term “artificial intelligence” without truly understanding it.

The phrase was introduced in 1956 by John McCarthy, mainly to attract research funding. Over time, it became a powerful label that shaped public thinking, even though its meaning remains unclear.

The documentary also connects today’s technology with older ideas that influenced data and science. One example is the work of Francis Galton, who promoted the theory of eugenics. These beliefs, which supported harmful and discriminatory views, played a role in shaping early systems of classification and data analysis.

By showing this history, Veatch highlights an important point. The issues we see in generative AI today are not random. They are connected to past ideas and human biases that have been carried forward into modern systems.

This is why generative AI ethics has become such an important topic. As the technology grows, more people are starting to ask difficult questions. How are these systems trained? What kind of data is being used? And who is responsible when harmful content is created?

Her film encourages viewers to slow down and think more deeply. Instead of getting caught in the excitement, it asks people to look at both the benefits and the risks of AI.

In a time when AI is moving quickly and shaping many industries, understanding generative AI ethics is no longer optional. It is necessary for building a future where technology is fair, responsible, and truly useful for everyone.

Categories NEWS Tags AI community, AI controversies, AI history, AI industry trends, AI technology impact, AI text to video, artificial intelligence, Ethical AI, generative AI, Generative AI risks, Ghost in the Machine documentary, machine learning bias

AI Regulation Policy: Trump Plan and Key Changes

March 21, 2026 by Prof. Mian Waqar Ahmad Hashmi
The new AI regulation policy in the United States signals a shift toward fewer restrictions and more focus on growth, while still addressing key concerns like child safety, deepfakes, and the country’s push to stay ahead in global AI development.

The debate around AI regulation policy in the United States is taking a new direction after the Trump administration introduced a detailed plan that focuses more on growth than strict control. The proposal outlines a strategy where the federal government keeps regulation limited while still addressing a few key risks, especially those involving children and emerging digital threats.

Instead of placing heavy restrictions on artificial intelligence, the plan encourages lawmakers to be cautious and avoid rules that could slow down innovation. At the same time, it makes it clear that a unified national approach is important. It suggests that individual states should not create separate laws that could interfere with a broader US AI strategy aimed at maintaining global leadership.

One of the central ideas in this AI regulation policy is protecting younger users. The proposal supports stronger safety steps for minors using AI platforms. This includes better age verification methods and limits on how companies use children’s data, especially for targeted advertising or training AI systems. However, it stops short of banning these practices completely, choosing instead to introduce controlled limits.

The plan also touches on the growing pressure that AI infrastructure can put on energy systems. With large-scale AI models requiring significant computing power, there is concern about rising electricity costs. Lawmakers are encouraged to consider solutions that can prevent sudden increases in energy demand while still supporting the expansion of AI technologies.

Another important area is education and workforce development. The proposal highlights the need for better training and skill-building programs so that people can become more familiar with AI tools. While the idea is mentioned clearly, the document does not go into deep detail about how these programs would be implemented.

When it comes to legal questions, especially around using copyrighted material to train AI models, the approach remains cautious. Rather than making immediate decisions, the plan suggests waiting to see how the legal landscape develops before introducing firm rules.

The issue of deepfakes and digital identity is also addressed. As AI-generated videos and voice clones become more realistic, the policy points toward creating a federal legal framework to protect individuals from unauthorized use of their likeness, voice, or identity. At the same time, it stresses that such laws should not limit free speech, allowing space for satire, parody, and news reporting.

The proposal also reflects ongoing concerns about overregulation. It advises against creating unclear rules or broad liabilities that could lead to unnecessary legal battles. The goal is to keep the environment stable for companies while still addressing major risks linked to AI use.

Importantly, this AI regulation policy is still just a proposal. It will only become effective if Congress reviews, approves, and passes it into law. Until then, it remains a blueprint that signals how the US may balance innovation, safety, and global competition in the fast-moving world of artificial intelligence.

Categories NEWS Tags AI child safety, AI content moderation, AI copyright, AI deepfakes, AI education, AI ethics, AI governance, AI infrastructure, AI law, AI legislation, AI policy blueprint, AI privacy, AI regulation, AI regulation policy, Trump AI policy, US AI strategy

Gemini AI Task Automation: Future of Mobile AI

March 21, 2026 by Prof. Mian Waqar Ahmad Hashmi
Gemini AI task automation is starting to show what it really means for a phone to handle tasks on its own — this hands-on look explains how it works, where it struggles, and why it still feels like an early but important step toward the future of everyday smartphone use.

Gemini AI task automation is slowly turning smartphones into something much smarter than we are used to today. It is still early, but the experience already feels like a small preview of what the future of mobile AI could look like.

I recently tried Google’s Gemini AI automation feature on two flagship devices, the Pixel 10 Pro and the Galaxy S26 Ultra. For the first time, an AI assistant is not just giving suggestions — it is actually using apps and completing tasks on your behalf. Right now, this feature is limited and only works with a few services like food delivery and ride-hailing apps, but the concept itself is powerful.

At this stage, Gemini AI is not faster than a human. In fact, it often feels slow and sometimes struggles with simple actions. If you are in a hurry and need to book a ride or order food instantly, doing it yourself is still the better option. However, speed is not the main idea behind this technology.

The real purpose of Gemini AI task automation is convenience. It is designed to handle tasks in the background while you focus on something else. You can start a task and let the AI assistant continue working, even if you are not actively looking at your phone. That small shift changes how we think about using smartphones.

When you choose to watch it in action, the process becomes quite interesting. Gemini shows step-by-step updates on the screen, explaining what it is doing. For example, while placing a food order, it can read menu options, understand portion sizes, and make logical decisions. In one case, it correctly selected two half portions to match a full meal request, which shows that the AI can adapt in real time.
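The half-portion decision described above is, at bottom, simple arithmetic over menu options. Here is a minimal sketch of that kind of reasoning, assuming a hypothetical `PORTIONS` menu and matching rule — the article does not describe Gemini's actual internals:

```python
from itertools import combinations_with_replacement

# Hypothetical menu data: portion name -> fraction of a full meal.
PORTIONS = {"full": 1.0, "half": 0.5, "quarter": 0.25}

def match_request(target: float, available: list[str], max_items: int = 4) -> list[str]:
    """Return the smallest combination of available portions summing to target."""
    for n in range(1, max_items + 1):
        for combo in combinations_with_replacement(available, n):
            if abs(sum(PORTIONS[p] for p in combo) - target) < 1e-9:
                return list(combo)
    return []

# With no full portion on the menu, a full-meal request resolves to two halves.
print(match_request(1.0, ["half", "quarter"]))  # ['half', 'half']
```

The point of the sketch is only that "adapting in real time" can reduce to searching small combinations of what the menu actually offers.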

Still, it is not perfect. There are moments when the system misses obvious things on the screen or takes longer than expected to complete a simple step. Watching it search for an item that is clearly visible can feel frustrating. These small issues remind you that the technology is still in development.

Even with these flaws, the overall experience stands out. This is not a staged demo or a polished presentation — it is a real AI assistant working on an actual phone. That alone makes it different from what we have seen before in the world of smartphone AI.

Gemini AI task automation may not solve major problems today, but it introduces a new way of interacting with devices. As the system improves, becomes faster, and supports more apps, it has the potential to change everyday mobile use completely.

For now, it feels like an early step. But it is an important one, showing that the future of AI assistants is not just about answering questions — it is about getting things done for you.

Categories NEWS Tags AI app control, AI assistant, AI automation, AI task automation, food delivery automation, Galaxy S26 Ultra, Gemini AI, Gemini AI task automation, Gemini beta, Google Gemini, Pixel 10 Pro, smartphone AI, Uber Eats AI

Google Fitbit AI Health Coach Uses Medical Records

March 19, 2026 by Prof. Mian Waqar Ahmad Hashmi

A new AI health coach is changing how people manage their daily health by combining fitness tracking with real medical insights. This update brings a smarter and more personal way to understand your body, using your own health data to guide better lifestyle choices.

Google is taking a big step in digital healthcare by improving its AI health coach, making it smarter and more helpful for everyday users. With this latest update, the AI health coach is no longer limited to basic fitness tracking. It can now understand medical records, daily habits, and wearable health data to give more meaningful and personal health advice.

This new Google Fitbit update shows how AI healthcare technology is changing the way people manage their health. Instead of just counting steps or tracking calories, the AI health coach can study different types of data, including sleep tracking, heart rate, and past health records. By combining all this information, it offers personalized health advice that feels more relevant to each user’s lifestyle.

One of the most important parts of this update is medical records integration. With user permission, the system can connect with medical data and turn it into easy insights. This helps the AI health coach act more like a virtual health assistant rather than just a fitness tool. It can suggest better routines, highlight possible health risks, and guide users toward healthier choices.
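The data-combining idea can be pictured as simple rules over several inputs. Everything in this sketch — the thresholds, condition names, and suggestion wording — is invented for illustration; Google has not published the coach's actual logic:

```python
# Hypothetical sketch: turning wearable readings and (consented) medical
# records into simple lifestyle suggestions.
def daily_advice(sleep_hours: float, resting_hr: int,
                 conditions: list[str], share_records: bool = False) -> list[str]:
    advice = []
    if sleep_hours < 7:
        advice.append("Aim for at least 7 hours of sleep tonight.")
    if resting_hr > 80:
        advice.append("Resting heart rate is elevated; consider a lighter workout.")
    # Medical records are consulted only with explicit user permission.
    if share_records and "hypertension" in conditions:
        advice.append("With hypertension on record, favor low-intensity cardio.")
    return advice

print(daily_advice(6.0, 85, ["hypertension"], share_records=True))
```

Note how the medical-records branch is gated on an explicit opt-in flag, mirroring the permission model the article describes.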
At the same time, Google is focusing on health data privacy. Since medical data sharing is sensitive, the company is working to make sure that user information stays secure and under control. This balance between smart features and safety is important as more people rely on smart health devices and digital health tracking in their daily lives.

Another area where the AI health coach is improving is sleep tracking accuracy. Fitbit has already been known for its sleep features, but now AI medical insights help users better understand their sleep patterns and how they affect overall health. This makes the Fitbit app features more useful for people who want a complete picture of their well-being.

This update also reflects broader healthcare AI trends. Companies are moving toward systems that not only track data but also understand it. With AI wellness recommendations and deeper analysis, users can get advice that feels closer to real human guidance.

Overall, the AI health coach is becoming a central part of modern health management. As Google health intelligence continues to grow, tools like this could change how people think about fitness, wellness, and medical care, making health support more accessible, personal, and easy to use every day.
Categories NEWS Tags AI fitness coach, AI health coach, AI healthcare, digital health tracking, Fitbit AI, Google Fitbit update, medical records integration, personalized health advice, wearable health data

No, ChatGPT Was Not Responsible for Treating Dog Cancer

March 19, 2026 by Prof. Mian Waqar Ahmad Hashmi

AI cancer treatment is opening a new chapter in how doctors understand and fight cancer, using smart technology to create more personal, faster, and more effective care for both humans and even pets.

AI is slowly changing how doctors and researchers understand and treat serious diseases, including cancer. One of the most interesting areas right now is AI cancer treatment, where new tools are helping experts find better and more personalized ways to fight the disease. This progress is not only helping humans but is also opening new doors in dog cancer treatment and overall veterinary cancer research.

Recently, scientists have been using AI in healthcare to study cancer at a much deeper level. With the help of ChatGPT-style tools applied to medicine, researchers can quickly review large amounts of medical data. This allows them to spot patterns that would normally take years to discover. As a result, AI-assisted diagnosis is becoming faster and more accurate, giving doctors a better chance to detect cancer early.

One of the biggest breakthroughs in AI cancer treatment is the use of personalized mRNA vaccine technology. Instead of using a one-size-fits-all approach, doctors can now design treatments based on a patient’s specific condition. By combining cancer genetic-profiling techniques with AI in medicine, researchers can understand how each tumor behaves and create targeted therapies. This is a major step forward in AI-driven precision medicine.

In addition, AI is playing a key role in immunotherapy for cancer. It helps scientists predict how the immune system will respond to certain treatments. This makes experimental cancer treatment safer and more effective. AI drug discovery is also speeding up the process of finding new medicines, which means patients may get access to better treatments much sooner than before.

Interestingly, these advancements are not limited to humans. In pet cancer care, researchers are applying the same AI tools to help animals. Dogs, for example, are now receiving advanced cancer treatment based on similar methods used in humans. This not only improves their quality of life but also helps scientists learn more about cancer in general.

Another powerful tool supporting AI cancer treatment is AlphaFold protein AI, which helps scientists understand protein structures. This knowledge is important because proteins play a key role in how cancer grows and spreads. With better understanding, researchers can design more effective treatments.

Overall, AI cancer treatment is changing the future of medicine. From faster diagnosis to personalized therapies and better outcomes for both humans and animals, AI in healthcare is proving to be a game changer. As research continues, we can expect even more improvements in how cancer is treated, making hope stronger for patients and their families.

Categories NEWS

Nvidia DLSS 5 Redefines AI Game Graphics

March 17, 2026 by Prof. Mian Waqar Ahmad Hashmi

Nvidia DLSS 5 is changing how game graphics are created, using generative AI to make visuals more realistic while also raising questions about how much it may change the original look and feel of games.

Nvidia DLSS 5 is making headlines after its announcement at the GTC conference, where the company introduced a new step forward in AI-powered graphics. The update is already creating mixed reactions in the gaming world, as it brings both impressive visual improvements and new concerns about creative control.

With Nvidia DLSS 5, the company is moving beyond traditional upscaling. Earlier versions focused on improving performance by using machine learning to sharpen lower-resolution images. Now, Nvidia DLSS 5 takes a different path by using generative AI to actively rebuild parts of a scene. This means lighting, shadows, and materials are no longer just enhanced—they are partially recreated to look more realistic.
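For contrast with the generative approach, the classic upscaling that earlier DLSS versions improved on can be sketched in a few lines. This nearest-neighbor example is illustrative only and is not how any DLSS version works internally — a real DLSS pipeline runs a trained neural network on the GPU:

```python
# Baseline (non-AI) nearest-neighbor upscaling: each low-resolution pixel is
# simply repeated. Learned upscalers replace this duplication step with a
# network that predicts plausible high-resolution detail instead.
def upscale_nearest(image: list[list[int]], factor: int) -> list[list[int]]:
    """Scale a 2D grid of pixel values up by an integer factor."""
    out = []
    for row in image:
        wide = [px for px in row for _ in range(factor)]
        out.extend([wide[:] for _ in range(factor)])
    return out

low_res = [[10, 20],
           [30, 40]]
print(upscale_nearest(low_res, 2))
```

The difference the article describes is that DLSS 5 no longer just fills in pixels between known ones; it regenerates lighting and material detail that was never in the input frame at all.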

Nvidia CEO Jensen Huang described Nvidia DLSS 5 as a major turning point for graphics. According to him, it combines human-made rendering with AI-generated details to deliver a new level of visual realism while still giving artists control over their work. This vision highlights how AI is becoming deeply connected with modern game design.

In supported games, Nvidia DLSS 5 shows clear changes in how environments and characters appear. Demonstrations from titles like Resident Evil Requiem, Starfield, Hogwarts Legacy, and EA Sports FC reveal richer lighting, smoother textures, and more lifelike surfaces. Elements such as skin, hair, and fabric respond to light in a more natural way, making scenes feel closer to real life.

However, not everyone is convinced. Some early reactions suggest that Nvidia DLSS 5 may go too far by altering the original artistic style of games. Critics compare these changes to other AI-generated visuals seen in photography and video, where the final result sometimes feels artificial or over-processed.

Nvidia explains that Nvidia DLSS 5 works by training AI models to understand complex scenes in detail. The system studies how light interacts with different materials and how objects behave in various conditions. Using this understanding, it generates new visual details while trying to keep the original structure of the scene intact.

This approach shows how the future of gaming graphics is evolving. Nvidia DLSS 5 is not just about making games run faster or look sharper—it is about redefining how visuals are created in real time. As generative AI continues to grow, tools like Nvidia DLSS 5 could become a standard part of game development.

At the same time, the debate around Nvidia DLSS 5 highlights an important question for the industry. As AI becomes more involved in creative processes, developers and players will need to decide how much change is acceptable and where to draw the line between enhancement and artistic integrity.

Overall, Nvidia DLSS 5 represents both innovation and uncertainty. It offers a glimpse into the future of AI in gaming, where technology can transform visuals in powerful ways, but also challenges the balance between realism and original design.

Categories NEWS Tags AI gaming revolution, future of gaming AI, generative AI visuals, next-gen game graphics, Nvidia DLSS 5

AI Training Human Emotion Using Real Actors

March 16, 2026 by Prof. Mian Waqar Ahmad Hashmi
AI companies are now working with actors to teach machines how real human emotions look and sound, helping artificial intelligence respond to people in a more natural and human way.

Technology companies are now exploring new ways to help artificial intelligence better understand people. A growing trend in the AI industry shows that companies are working with actors and creative professionals to teach machines how real human emotions look and sound. This effort is part of a wider push around AI training human emotion, which aims to make AI systems respond in a more natural and human-like way.

Many AI companies rely on large sets of AI training data to build and improve their systems. While machines are good at processing numbers and text, they still struggle with emotions such as happiness, frustration, surprise, or sadness. To solve this challenge, some AI labs are hiring improv actors who can perform different emotional reactions in a natural way. Their performances help create more accurate human emotion datasets used for AI models training.

These actors are asked to express a wide range of feelings and quickly switch between them during recordings. The goal is to capture emotional tones, facial expressions, body language, and voice changes that people use in everyday conversations. This information becomes specialized AI training data that researchers use to improve emotion recognition in machines.
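One labeled recording of the kind described above can be pictured as a simple record type. This hypothetical schema — the field names and example values are invented for illustration, not taken from any real lab's dataset — shows what such training data might capture:

```python
from dataclasses import dataclass, field

# Hypothetical schema for one entry in a human-emotion training dataset.
@dataclass
class EmotionSample:
    actor_id: str
    emotion: str            # e.g. "happiness", "frustration", "surprise"
    modality: str           # "audio", "video", or "audio+video"
    transcript: str         # what the actor said during the take
    intensity: float        # annotator-rated, 0.0 (subtle) to 1.0 (extreme)
    tags: list[str] = field(default_factory=list)  # e.g. ["rapid-switch"]

sample = EmotionSample(
    actor_id="actor-017",
    emotion="surprise",
    modality="audio+video",
    transcript="Wait... you're telling me it worked?",
    intensity=0.8,
    tags=["rapid-switch"],
)
print(sample.emotion, sample.intensity)
```

Structuring each take this way is what lets researchers train emotion-recognition models on tone, wording, and intensity together rather than on text alone.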

Experts say that emotional understanding is an important step for the future of conversational AI. When AI systems can recognize emotions, they can respond more carefully in customer service, digital assistants, education tools, and healthcare support systems. Because of this, AI emotion recognition and AI conversational training are becoming key research areas.

The work also shows how creative professionals are finding new roles in the growing AI workforce. Instead of traditional acting jobs, many performers now contribute to AI behavior modeling and AI authenticity training. Their skills help researchers build systems that better understand how humans communicate.

Despite these advances, AI still has limitations when it comes to emotional intelligence. Machines can learn patterns from data, but they do not truly feel emotions the way humans do. Researchers continue studying how to improve AI emotional intelligence training so that future systems can interact with people in more thoughtful and respectful ways.

As the technology develops, the collaboration between AI labs and creative professionals may become more common. By combining technical research with human expression, the industry hopes to build AI models that understand not only words, but also the emotions behind them.

Categories NEWS Tags AI data labeling jobs, AI emotion recognition, AI emotional intelligence, AI industry hiring actors, AI labs training models, AI model improvement, AI models training data, AI technology news, AI training data industry, AI training human emotion, human emotion dataset, improv actors AI training
© 2026 • Built with GeneratePress