
WORLD (Enmaeya Feature) - November 3, 2025
In the streets and homes of Beirut, citizens have grown accustomed to turning to artificial intelligence for quick answers to their daily problems: a mysterious headache, a sudden financial crisis, or urgent legal advice. For a while, ChatGPT seemed like the perfect advisor, capable of answering any question around the clock. But recent developments have shown that such reliance carries significant risks.
On October 29, 2025, OpenAI announced that ChatGPT would no longer provide specific medical, legal, or financial advice, officially transforming it into an educational tool. Instead of offering direct recommendations, the AI now explains general principles and encourages users to consult doctors, lawyers, or financial experts before making any major decisions.
This decision came after multiple cases of misuse, including incorrect medical diagnoses, misleading financial recommendations, and inaccurate legal advice, alongside potential legal risks for major tech companies.
AI Is Not Your Doctor
In an exclusive interview with Enmaeya, digital transformation and cybersecurity expert Roland Abi Najem warned of the major risks associated with using ChatGPT for medical, psychological, or financial guidance.
“Until recently, there were many problems caused by using ChatGPT in these areas,” Abi Najem said. “When we talk about medical advice, we mean health or mental health issues. ChatGPT is neither a doctor nor a therapist and may sometimes provide inaccurate information based on incorrect data from the user.”
He added, “For example, if you tell ChatGPT that you have a migraine, it may give recommendations based on that information, while the actual cause could be entirely different. Relying on AI in these areas can therefore be misleading and dangerous.”
Your Money at Risk
Abi Najem also highlighted the financial risks: “In financial matters, such as investing in cryptocurrencies, real estate, or gold, ChatGPT may provide incorrect information, which could lead to losses if users rely on it without verifying official sources.”
Digital Hallucinations
The expert explained one of AI’s most prominent problems, known as “hallucinations,” in which the system generates confident-sounding answers that are false or unsupported by any credible source. “In medical or financial fields, these hallucinations can be extremely harmful and may lead to serious mistakes,” he said.
Risks of Sharing Personal Data
Abi Najem warned that conversations involving health or financial information could become public or be published online if not securely stored, exposing users to serious risks, including identity theft or legal liability.
While ChatGPT remains a powerful educational tool for researching and understanding principles, it is no longer a direct advisor. Experts caution against relying on it for major life decisions, emphasizing that AI is not a substitute for human expertise and that consulting professionals and verifying official sources is always essential.
