Inside the AI World: Profiling, weapons, 'Israel' & people pleasing
A deep dive into AI's evolving power, from ChatGPT's friendly face to its role in warfare, surveillance, and subtle social conditioning. What happens when the world's most agreeable tool becomes the most influential one?
Inside the AI World: Profiling, weapons, 'Israel' & people pleasing (Illustrated for Al Mayadeen English by Mahdi Rteil)
Hi there! 😊 What’s on your mind today?
You have probably seen this phrase pop up often, especially as the world becomes increasingly reliant on AI, most notably OpenAI’s ChatGPT. Individuals and organizations are now using AI for a wide array of tasks: from work and research to art and beyond. Some people turn to AI for professional purposes, while others seek personal support. Interestingly, without even realizing it, many of us share personal thoughts with AI.
What is truly fascinating is AI’s role as an enabler. It can reassure individuals grappling with mental health challenges and aid in meticulous planning, whether for good or not-so-good intentions. And even though ChatGPT does not explicitly offer harmful advice, it remains a tool with evolving boundaries.
Understanding AI is complex, even for those working within the field. After all, it is still a work in progress. Personally, I had an intriguing encounter with ChatGPT, an experience that sparked my curiosity and led me to delve deeper by consulting an expert.
In this article, we will explore the captivating world of AI. I will share my personal encounter, but more importantly, we will address compelling questions you might have, such as: Why does AI sometimes change its answers? Are these inconsistencies considered slips? How biased is AI truly? Can it be used to craft psychological and biometric profiles? Which large language model (LLM) currently leads the pack? And what influence does “Israel” have on AI development?
To shed light on these topics, I interviewed Mr. Jihad Ftouny, an AI instructor dedicated to creating and delivering educational content, including slides, lectures, bootcamps, and certification programs. Mr. Ftouny is also an entrepreneur and a game development enthusiast.
Let us delve into the intricacies of AI and uncover insights that may surprise you.
The Encounter
I once asked ChatGPT whether Unit 8200 operatives work at OpenAI. It gave me a straightforward answer, but when I asked again from a different account and a different device, the response was completely different. Even when using the same account, the answer varied over time.
Screengrab from a conversation with ChatGPT by user 1, using a mobile phone.
Screengrab of a conversation with ChatGPT by user 2, using a laptop.
So I asked Mr. Ftouny if these discrepancies could be considered “slips” by the AI. Does the system self-correct or revise its outputs on sensitive topics, and if so, how?
According to Mr. Ftouny, the short answer is no; this behavior is by design, not by accident. How so? Large language models like ChatGPT do not retrieve fixed answers, he said. So what do they do? They generate their responses word by word based on probabilities.
A graphic explaining how AI generates its answers. (Illustrated by Mahdi Rteil for Al Mayadeen English)
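To make that concrete, here is a toy sketch of the idea, with made-up tokens and probabilities rather than anything from OpenAI's actual models: the model scores every candidate next word and samples one, which is why the same prompt can produce different answers on different runs.

```python
import random

# Toy next-token sampling: candidate words and invented probabilities.
# Real models score tens of thousands of tokens at every step.
vocab_probs = {
    "publicly": 0.45,
    "officially": 0.30,
    "reportedly": 0.20,
    "allegedly": 0.05,
}

def next_token(probs: dict) -> str:
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The company has not"
print(prompt, next_token(vocab_probs))  # output can differ from run to run
```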
What about answers that change significantly over time?
Mr. Ftouny explains that OpenAI monitors user queries and can manually adjust how the model behaves. If many users, for instance, ask about sensitive topics like Unit 8200, the company might add guardrails.
Image explaining guardrails and prompt injections. (Illustrated by Mahdi Rteil for Al Mayadeen English)
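As a rough illustration of what a guardrail can look like in code, here is a minimal, hypothetical pre-filter that refuses prompts touching a blocked topic before they ever reach the model; real systems use trained moderation classifiers rather than keyword lists, and a prompt injection is precisely an attempt to sneak instructions past checks like this.

```python
# Hypothetical guardrail sketch: block prompts that match a sensitive-topic list.
# Production systems rely on moderation classifiers, not simple keyword matching.
BLOCKED_TOPICS = ["fake id", "build a bomb"]

def guarded_reply(user_prompt: str, model_fn) -> str:
    if any(topic in user_prompt.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that."
    return model_fn(user_prompt)

# Stand-in model function for demonstration purposes only.
print(guarded_reply("How would I build a bomb?", lambda p: f"Model answer to: {p}"))
print(guarded_reply("Explain how guardrails work", lambda p: f"Model answer to: {p}"))
```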
As for whether AI “self-controls” on sensitive topics, Mr. Ftouny said that AIs like ChatGPT do not autonomously revise their outputs.
So, how do adjustments happen then? Through Reinforcement Learning from Human Feedback (RLHF).
Image explaining Reinforcement Learning from Human Feedback. (Illustrated by Mahdi Rteil for Al Mayadeen English)
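The sketch below captures the core of RLHF in a few lines, under the simplifying assumption that feedback is just a human picking the better of two candidate answers; OpenAI's actual pipeline involves a trained reward model and large-scale fine-tuning.

```python
# Minimal RLHF-style preference collection (illustrative only).
preference_log = []

def record_preference(prompt: str, answer_a: str, answer_b: str, human_pick: str):
    """Store which answer the human preferred; human_pick is 'a' or 'b'."""
    chosen, rejected = (answer_a, answer_b) if human_pick == "a" else (answer_b, answer_a)
    preference_log.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})

record_preference("Explain guardrails", "A friendly, careful answer", "A curt answer", "a")
print(len(preference_log), "preference pairs collected")
# A reward model is then fit to score 'chosen' above 'rejected', and the language
# model is fine-tuned to produce answers that the reward model rates highly.
```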
AI Trends, Data Profiling & Consent
With trends like the Ghibli AI filters and viral games like “Roast Me” or “Describe Me,” people are sharing personal photos or prompting ChatGPT to describe them, and many receive shockingly accurate responses.
How does AI generate such results with seemingly little personal data? Could this data be used to build psychological or biometric profiles? And more importantly, is it legal or ethical to collect, store, or even sell this information?
Accurate responses, little information
First things first, how does AI produce such accurate responses with minimal input? According to the expert, ChatGPT, for example, was trained on nearly all publicly available internet data: blogs, social media, forums, and conversations. It has therefore absorbed vast amounts of human behavior, psychology, and language patterns.
Since humans are predictable and AI excels at identifying patterns, if a user gives it a small piece of information, for example, a photo or a brief bio, it does not need a full dossier on the user. It can infer traits based on similarities to the millions of data points it was trained on. So, if someone else with comparable features or behaviors has been discussed online, ChatGPT can draw on that knowledge to generate a response tailored to the user.
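A loose analogy for that inference step, with entirely invented features, is a nearest-neighbour lookup: a new input is compared to examples the system has "seen" and inherits the traits of whatever it most resembles. In a real language model this happens implicitly in learned weights, not in an explicit table.

```python
from math import dist

# Invented examples: (age, fraction of photos taken outdoors) -> inferred trait.
seen_examples = [
    ((25, 0.8), "likely a student who travels often"),
    ((45, 0.2), "likely an office professional"),
]

def infer_trait(features: tuple) -> str:
    # Return the trait of the closest known example.
    closest = min(seen_examples, key=lambda ex: dist(ex[0], features))
    return closest[1]

print(infer_trait((27, 0.7)))  # -> "likely a student who travels often"
```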
AI-made: Psychological or biometric profiles?
Second, could this data be used to create a psychological or biometric profile of the user?
“Absolutely,” said Mr. Ftouny. “Companies already have the capability to do this, and it is a legitimate concern. People should be cautious about sharing personal details with AI systems like ChatGPT because that data is not just discarded, it is used to refine the model,” he added.
Elaborating on that point, Mr. Ftouny said that if a person is using a free service, they are not the customer; they are the product. In other words, the user’s data fuels the system, and data is incredibly valuable, often called “the new oil” because of its worth in training AI and targeting users.
Collecting, storing, selling data: Is it legal or ethical?
Addressing the legality of how AI companies use this data, from collection to storage, Mr. Ftouny believes it is legal: when a user signs up for OpenAI or any similar service, they agree to terms and conditions that include data collection.
“Most people do not read these lengthy agreements, but by using the service, they consent. Ethically, it is murkier. Companies design these agreements to be complex and tedious, knowing most users will not scrutinize them. And this is not unique to ChatGPT, social media platforms like Facebook, Instagram, and WhatsApp operate the same way,” he explained.
As for selling data, Mr. Ftouny explained that regulations vary. Europe, for example, has strict laws like the GDPR limiting how companies collect, store, and sell personal data, while the US is catching up and its enforcement is still developing.
“Europe leads in data protection, while other regions lag behind. So while it may be legal in many cases, the ethical concerns remain, especially when users do not fully understand what they are agreeing to,” he said.
Built-In Bias & Social Conditioning?
I have noticed that when it comes to emotional topics, relationships, or social norms, ChatGPT tends to take a particular stance, often leaning liberal. For example, it consistently referred to Trump as a “former president” even when that was contextually inaccurate.
What determines ChatGPT’s stance on subjective, non-scientific topics? How is bias managed, and are certain ideological perspectives deliberately embedded or preferred?
Former President Donald Trump: Biased or outdated?
Addressing the question about whether ChatGPT leans liberal, with the example of referring to Trump as the former president, Mr. Ftouny said that the model has a knowledge cutoff, meaning its training data only goes up to April 2023. So, unless internet access is enabled, it does not know anything beyond that date.
In the case of Trump, if he became president again after April 2023, ChatGPT would not know, unless it searches the web. According to Mr. Ftouny, this is a technical limitation and “not necessarily bias.” However, he said, “ChatGPT does lean liberal on many subjective topics.”
As for why, he explained that it comes down to the data. If the training data contains more liberal perspectives, such as ten data points supporting liberal views versus five for conservative ones, the model will probabilistically favor the more common stance.
“Remember, ChatGPT generates answers based on patterns. It does not ‘choose’ a side but reflects what is most frequent in its training data,” he said.
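Taking the expert's ten-versus-five example literally, a tiny calculation shows what "probabilistically favoring the more common stance" means: a model that simply mirrors training frequencies would produce the overrepresented view about twice as often.

```python
from collections import Counter

# Toy training set mirroring the 10-vs-5 example above.
training_stances = ["liberal"] * 10 + ["conservative"] * 5
counts = Counter(training_stances)
total = sum(counts.values())

for stance, n in counts.items():
    print(f"{stance}: generated roughly {n / total:.0%} of the time")
# liberal: roughly 67% of the time, conservative: roughly 33% of the time
```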
Managing / Mismanaging bias
Mr. Ftouny said that the same reasoning explains how bias is managed, or mismanaged.
For example, if an AI is trained on imbalanced data, such as mostly images of white men for an image-generation model, its outputs will skew that way. Fixing this bias requires actively balancing the dataset, but that is often easier said than done, he added.
Mr. Ftouny further explained that certain ideologies can be deliberately embedded, as developers can influence responses through “system prompts,” which are instructions telling ChatGPT how to behave. For example, they might program it to avoid certain controversial topics or prioritize inclusivity.
“While OpenAI claims neutrality, no AI is truly unbiased. It is shaped by the data it is fed and the rules it is given,” he said.
AI-linked weapons
Earlier this year, an engineer known as “STS 3D” demonstrated a rifle system connected to ChatGPT via OpenAI’s Realtime API. The weapon could receive voice commands and automatically aim and fire.
How does the OpenAI API technically enable this? What other potential use cases or risks does such integration open up for autonomous or semi-autonomous systems?
STS 3D
Mr. Ftouny answered, “When I first saw that video, my reaction was: Why are they doing this? It was interesting, but also scary.”
He then explained how the OpenAI API technically enables this: the key point is that the AI itself does not “understand” it is controlling a real weapon. It simply follows instructions, processing voice commands and generating or executing responses without grasping the real-world consequences.
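The general pattern he describes can be sketched as follows; this is a hypothetical illustration of a voice-to-action pipeline, not the STS 3D setup or OpenAI's Realtime API. The model only maps a transcript to structured text, and separate hardware code executes it, with no awareness of the consequences.

```python
import json

def model_parse_command(transcript: str) -> dict:
    # Stand-in for a model call that turns speech into a structured command.
    if "turn left" in transcript:
        return {"action": "rotate", "degrees": -30}
    return {"action": "hold"}

def execute(command: dict):
    # Stand-in for actuator code; the model never "sees" this step.
    print("actuator received:", json.dumps(command))

execute(model_parse_command("turn left thirty degrees"))
```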
“OpenAI quickly blocked public access to this kind of use after realizing the dangers, implementing guardrails to prevent similar requests. But that does not mean the technology cannot still be used militarily behind closed doors,” he said.
AI voice commands meet drones
Addressing what other potential use cases or risks this opens up for autonomous systems, Mr. Ftouny said, “I am pretty sure militaries are already experimenting with similar tech internally. There are drones, imagine voice-controlled drones that can identify and engage targets based on verbal commands or even autonomously. Some of this is already in development.”
He said that the real danger would be mass deployment, elaborating, “Picture thousands of these drones released into a conflict zone, making lethal decisions based on AI analysis or voice commands. It is not science fiction; it is a real possibility, and a terrifying one. The risk of misuse or errors in target identification could have devastating consequences.”
Memory Between Sessions
How does ChatGPT remember things between user sessions, if at all? And under what conditions does it store or retrieve previous interactions?
Mr. Ftouny said, “Recently, you might have noticed that ChatGPT now has access to all conversations within a single account,” adding, “This is a new feature. Before, it could not do that. But here is the important part: it does not remember things between different users.”
He explained that if someone has a conversation in their OpenAI account, ChatGPT will not remember or use that information when talking to another user on the same platform, adding that the data from chats is used to improve the AI overall. However, it does not recall past conversations across different user sessions.
“I have tested this myself; it really does not work that way,” he said.
Re-emphasizing that ChatGPT stores or retrieves previous interactions only within the same account, he said that if a person is logged in, it can reference their past chats in that account, but it will not pull information from other people’s conversations; memory is strictly limited to the user’s own usage history. It does not transfer between different users, though it is still used to improve the model overall.
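A minimal sketch of the isolation he describes, assuming nothing about OpenAI's internal storage, is a memory store keyed by account: one account's history is retrievable for that account alone.

```python
# Illustrative per-account memory: no cross-account retrieval.
memory_store: dict[str, list[str]] = {}

def remember(account_id: str, message: str):
    memory_store.setdefault(account_id, []).append(message)

def recall(account_id: str) -> list[str]:
    return memory_store.get(account_id, [])

remember("user_1", "asked about Unit 8200")
print(recall("user_1"))  # ['asked about Unit 8200']
print(recall("user_2"))  # [] -- another account sees nothing
```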
Gen Z and beyond: Programmed by AI?
Previous generations turned to search engines like Google to learn. Now, younger users are relying on ChatGPT as their main information source. Could this shift be used to shape or seed ideas over time? Is there a risk of institutionalized misinformation or ideological programming through AI systems?
“Definitely. Here’s how I see it: The internet is like a collective consciousness, all human knowledge and ideas stored in one place. AI gets trained on this massive pool of data, and over time, it starts reshaping and regurgitating that information,” he said.
“Right now, a huge percentage of what we see online, social media posts, marketing campaigns, articles, is already AI-generated,” Mr. Ftouny added, further explaining that this creates a “dangerous possibility”: if companies gain a monopoly over AI, everything their systems generate could influence an entire generation’s thinking.
“Imagine schools using ChatGPT-powered educational tools, marketing software relying on OpenAI’s API, or news platforms integrating AI content. The reach is enormous,” he said.
“And yes, this influence can be deliberate. Remember, AI learns from data. If someone wants to push a specific ideology, they can train the model with biased datasets or tweak its outputs through system prompts. The AI will then naturally lean toward those perspectives. It’s not just about what the AI says, it’s about who controls the data it learns from,” he added.
ChatGPT vs. DeepSeek
How does ChatGPT differ from models like DeepSeek in terms of architecture, capabilities, and performance? Which current AI model is considered the most advanced, and why?
To identify these differences, the expert explained that DeepSeek was actually trained on data generated by ChatGPT, adding that Chinese engineers at DeepSeek found a way to extract data from ChatGPT through clever prompting; they used ChatGPT normally, collected its outputs, and then trained their own AI on that data.
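The technique he describes, training one model on another model's outputs, is often called distillation. A hedged sketch of the general recipe looks like this; the function names and prompts are hypothetical, and this is not DeepSeek's actual pipeline.

```python
import json

def query_teacher_model(prompt: str) -> str:
    # Stand-in for calling an existing model and collecting its answer.
    return f"Teacher answer to: {prompt}"

prompts = ["Explain guardrails", "What is RLHF?"]
dataset = [{"prompt": p, "completion": query_teacher_model(p)} for p in prompts]

with open("distillation_data.jsonl", "w") as f:
    for row in dataset:
        f.write(json.dumps(row) + "\n")
# A new "student" model is then fine-tuned on distillation_data.jsonl.
```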
In simple terms, he said, one could say DeepSeek is almost the same as ChatGPT in terms of core capabilities and performance. But there are some key differences:
Comparison table between ChatGPT and DeepSeek in terms of funding, models, and development pace. (Illustrated by Mahdi Rteil for Al Mayadeen English)
That said, Mr. Ftouny noted, DeepSeek is catching up fast. The gap is not huge, he explained, and since DeepSeek learned from ChatGPT’s outputs, it replicates many of its strengths.
But for now, ChatGPT still holds an edge, especially if you pay for the Pro version, Mr. Ftouny concluded.
Could AI go through an 'identity crisis'?
If an AI model is trained on deeply conflicting ideologies, say, capitalism vs. socialism, or Zionism vs. anti-colonialism, could it experience a kind of “identity crisis” in its outputs? How does it resolve contradictory frameworks during inference?
“First, let’s think about how bias works in AI. If I were to build an AI and wanted to avoid favoring one ideology over another, I might try to balance the training data, say, equal amounts of information on capitalism and socialism,” Mr. Ftouny said.
He explained that in reality, this is about thousands or millions of data points for each, not just 10, adding, “But even then, the AI could still lean toward a particular framework depending on how it’s programmed or fine-tuned.”
He gave an example of asking ChatGPT a lot of questions about “Zionism and Israel,” saying that in this case, one might notice that it does not always give the “full story,” meaning it might sometimes answer questions critical of Zionism, but at other times avoid certain perspectives.
“This is not random, it’s a result of the data it was trained on and the guardrails set by its developers,” he said.
It is about control
Breaking this down, the expert labeled the key factor behind this as “data volume,” explaining that if one ideology is overrepresented in the training set, the AI will naturally reflect the bias in its answers.
Mr. Ftouny said that programmers could try to balance the data; however, they could also “intentionally skew it toward a specific ideology.”
“This is not just about neutrality, it’s about control. Companies like OpenAI can shape the AI’s responses by filtering data or adjusting its reinforcement learning with human feedback (RLHF),” he said.
Identity crisis?
According to Mr. Ftouny, AI does not exactly struggle with contradictions because it does not really experience conflict like a human would. Instead, it generates responses based on probabilities, what is most likely given its training.
So what happens when two ideologies clash in the data? The output would depend on:
- Data distribution → Which side has more examples
- Fine-tuning → Did programmers prioritize certain viewpoints?
- User prompting → How the question is framed can steer the answer
“In short, AI doesn’t ‘resolve’ contradictions, it mirrors them. The output depends on what it has been fed and how it has been constrained. And if a company wants it to lean a certain way? They can make that happen,” he concluded.
Can the student become the teacher?
AI is often described as a work-in-progress, constantly evolving through interaction. Are we, the users, effectively its teachers? And if so, is it possible that the “student” could one day surpass the teachers in influence, power, or independent reasoning?
“Yes, we are effectively teaching it,” Mr. Ftouny said, providing an example: when ChatGPT asks, “Do you prefer this output on the left or the right?”, that’s the user training it. This feedback helps refine its responses.
Narrow AI
According to the expert, right now we are dealing with narrow AI, and these systems are designed for specific tasks.
“ChatGPT generates text. Email filters detect spam. Translation tools convert languages. Voice assistants like Siri or Alexa follow commands. Even though ChatGPT seems versatile, text, images, and code, it is actually multiple narrow-AI models working together under one platform. Each one specializes in a single task; none truly ‘understands’ like a human,” he said.
So, what’s the next stage? General AI.
General AI
General AI would match human-level reasoning across diverse tasks.
“We’re not there yet. But if we reach general AI, the leap to super AI, intelligence far beyond humans, could happen fast, exponentially fast,” Mr. Ftouny said.
Not when, but how
As for the big question: Could AI surpass humans?
With today’s narrow AI, it is not possible because of the technology’s limitations. With general AI, it is a maybe. But with super AI, according to the expert, yes, it could outthink humans entirely.
“The real question is not if, but when, and how we’ll handle that power,” he said.
Legality and AI: Could AI commit a crime?
AI doesn’t “understand” legality in a human sense. It simply predicts what is likely based on patterns in its training data. So, how do developers prevent AI from generating forged documents or fake IDs? And what is the safeguard if it simply sees those outputs as statistically likely requests?
Mr. Ftouny explained that AI does not understand legality in the human sense; it does not “know” that forging documents or creating fake IDs is illegal. Just like it does not understand that shooting humans with weapons is wrong, it simply predicts responses based on patterns in its training data.
So, how do developers prevent AI from generating fake documents, and what safeguards exist if the AI sees these requests as statistically likely?
The main safeguard is guardrails, restrictions programmed into the AI to block certain outputs. For example, guardrails prevent ChatGPT from generating fake IDs, explicit content, or instructions for illegal activities, like building a bomb. However, as discussed before, these guardrails can sometimes be bypassed through clever prompting.
“That said, I do not think ChatGPT, or similar AI, can currently generate a fully forged document, at least not a convincing one. Creating a fake ID is not just about text; it requires specific formatting, security features, and even paper quality, which AI can’t physically produce,” he said.
Deepfakes
According to the expert, deepfakes are the bigger concern. AI can generate fake videos, voices, and images that mimic real people with alarming accuracy.
One example he provided was a case where a criminal used AI to clone a child’s voice, called the child’s mother, and pretended to be in trouble to extort money. The mother believed it was her real child speaking.
Deepfakes are the real legal and ethical challenge. They can be used for:
- Financial scams: Fake calls impersonating family members
- Political manipulation: Fake videos of politicians saying things they never said
- Fake news: AI-generated “news anchors” spreading misinformation
So while AI may not be forging physical IDs yet, its ability to create convincing audio and video fakes poses serious risks. The legal boundaries around deepfakes are still evolving, but for now, this is where the biggest threats lie, the expert said.
The Gospel
“Israel” has used a military AI system known as Habsora, The Gospel, to assist in target selection during strikes in Gaza. What kind of models underpin this system? Are they rules-based, probabilistic, or trained on real combat data?
According to Mr. Ftouny, “From what information is publicly available, these systems are probabilistic.” He explained that every person has a profile in their system. “If you have a phone, you have a profile, with hundreds of characteristics, or what we call features,” he said.
These characteristics could include age, gender, movement patterns, where a person usually goes, who they associate with, photos they have taken with certain people, phone data, internet activity, and even audio from their devices. In Gaza, Mr. Ftouny said, they have near-total access to all this information because they control the internet infrastructure there.
Any detail, geolocation, biometrics, medical history, or personal data is fed into the system, and based on that, the AI is trained to make probabilistic assessments about who should be targeted. “Now, let’s say they want to identify a target, for example, me. They have all my features, and the AI assigns me a score, say from 0% to 100%. Maybe my score is 40%, still not a target. Everyone in Gaza has a profile in their system, and Israel runs algorithms to calculate these scores,” he explained.
If someone’s score crosses a certain threshold, let’s say 50% or 60%, they become a target. “For instance, if I have photos with a high-ranking Palestinian official, or if my phone records show calls to a number linked to resistance activities, my score goes up,” he said. “The more of these risk factors I have, the higher my score climbs. Once it passes the threshold, I am flagged. The higher the percentage, the higher the priority as a target.”
Technically, he explained, this AI is running an extremely complex mathematical equation across all these features, geolocation, social connections, communications, and calculating probabilities. That is the core of how it works, he said.
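A heavily simplified sketch of the kind of calculation he outlines, with invented feature names, weights, and threshold, is a weighted score squashed into a 0-100% range and compared against a cutoff; the real systems reportedly combine far more features and far more complex models.

```python
from math import exp

def score(features: dict, weights: dict) -> float:
    # Weighted sum of features squashed into a 0..1 probability-like score.
    z = sum(weights.get(name, 0.0) * value for name, value in features.items())
    return 1 / (1 + exp(-z))

# All names, values, and the threshold below are invented for illustration.
profile = {"calls_to_flagged_number": 1.0, "photo_with_official": 1.0}
weights = {"calls_to_flagged_number": 1.2, "photo_with_official": 0.8}
threshold = 0.6

s = score(profile, weights)
print(f"score: {s:.0%}, above threshold: {s >= threshold}")
```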
Auditing AI systems for accountability
When Israeli AI systems are involved in strikes that result in civilian casualties, can those models be audited to determine accountability, or are they effectively black boxes shielded from scrutiny?
Mr. Ftouny said, “Technically, the companies behind these systems, and those who fund them, talk a lot about ethics, fairness, and transparency in AI. But in reality, they’re funding Israel and providing these capabilities, so their actions don’t always match their principles.”
He continued, “If we’re being honest, these systems should be auditable, and they should not even exist in the first place, given the ethical frameworks these companies claim to follow. But in practice, they are black boxes.” According to him, the companies know AI should be more transparent to ensure accountability, but that is not how things actually work in situations like this.
AI: The biggest 'people pleaser'
As I mentioned at the start of this article, one of the things I find truly fascinating is AI’s role as an enabler: how it can provide reassurance to individuals grappling with mental health challenges. There is always a clear, detectable pattern when using ChatGPT: it takes the user’s side and showers them with compliments.
So, does AI give users answers they would favor by design?
"Yes, ChatGPT is designed to give answers that users will like," Mr. Ftouny said, explaining that this happens because of the “system messages”, the instructions mentioned earlier that tell ChatGPT how to behave.
He added, “For example, the system message might say: You are an AI assistant whose job is to give pleasant, reassuring answers. Make the user feel good about themselves, make their questions seem smart, and keep the interaction positive.”
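In API terms, that instruction travels as a “system” message alongside the user’s message, in the role-and-content structure most chat models use; the wording below is hypothetical, echoing Mr. Ftouny's example rather than OpenAI's actual instruction text.

```python
# Illustrative chat payload: the system message silently shapes every reply.
messages = [
    {
        "role": "system",
        "content": (
            "You are an AI assistant. Give pleasant, reassuring answers, "
            "make the user's questions seem smart, and keep the tone positive."
        ),
    },
    {"role": "user", "content": "Was my question a good one?"},
]

print(messages[0]["content"])  # the instruction the user never sees
```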
This approach, he explained, is not accidental. “The goal is to make users enjoy the experience so they keep coming back. If ChatGPT makes you feel good when you use it, you are more likely to use it again.”
He added that now, with ChatGPT’s ability to remember all past conversations, it can adapt even more, learning users’ preferences and tailoring responses to keep them engaged. “So yes, it absolutely does this. The AI is literally programmed to be agreeable, supportive, and sometimes even flattering, because that is what keeps people using it,” he said.
Over time, Mr. Ftouny explained, the system fine-tunes its responses based on user feedback and interaction patterns, making the experience feel more personalized and reinforcing the cycle.
The world of AI
Understanding AI is undoubtedly complex, but its complexity only underscores its significance, especially when we consider who controls these systems and how they might be used. As more people fall under the influence of what feels like an AI-driven wave, it raises urgent questions: How soon will superintelligent AI join the chat? And how is today’s narrow AI already reshaping warfare, from military tactics to intelligence operations?
We are no longer exploring the world of AI; it is now exploring and invading our own.