Warith Niallah provides insights on AI friends and friend bots

There is a new trend in which someone can have friends and companions driven entirely by artificial intelligence (AI): people share their deepest secrets, build relationships, and have casual conversations with these systems.

Users can install an app on their mobile device, or even get a real phone number, and have a text conversation with a bot that emulates a friend. These AI systems can also initiate conversations, just like human friends.

We interviewed Warith Niallah (Instagram: @WarithNiallah), Chief Executive Officer of the multinational media and technology company FTC Publications, Inc., to gain insights and ideas about this technology. Warith is known to have an affinity for AI, but he has consistently warned that misuse of the technology and its data can affect some people and raise mental health concerns.

The interview, presented as candidly as possible, proceeded as follows.

Q: Thank you for joining us, Mr. Niallah.

A: My pleasure; please call me Warith.

Q: You were involved in the early development of AI. Please tell us a little about it.

A: In the early 90’s, I experimented with Digital Equipment Corporation (DEC) minicomputer systems that were used to monitor and predict data failures at AT&T telemarketing centers, including DECTalk, a voice synthesizer that reported the data. At that time, AI was truly in its infancy.

Q: Today, mobile app stores have apps that let you create a friend, buy them clothes, and have real conversations …

A: Yes, I know. It might be fun, but I have some concerns.

Q: Please explain in detail.

A: Social media is the center of life for many young adults and children, and in some cases people desperately want to fit in. Peer pressure is at astronomical levels. You may find that an artificial social circle is created and the user becomes isolated, and the possibility of relying on AI friends will tempt some people.

Q: So isolated people may come to rely on AI friends and AI chatbots?

A: Probably. You may even see these apps advertised as a way to talk about your day, share your secrets, and even seek dating. The data collection is enormous, and AI that behaves like a real person can lead someone to seek its advice. Remember that these are machines, not humans. It can be argued that computers may be more moral than humans, but they have no moral compass; and what is their definition or recognition of morality? Imagine a machine that makes errors or miscalculations in the information it collects. What if the AI makes an error, or intentionally gives bad advice or bad suggestions?

Q: Perhaps like a human friend telling someone to do something wrong?

A: Absolutely, or worse: it might offer some solution that is possibly illegal. Who is responsible then? We hear about mental health situations in which people hear voices. What about situations where they read texts, or interpret AI statements as a call to action?

Q: That’s compelling. What can be done about this?

A: There is no one-size-fits-all solution, but you can take precautions to avoid the potential pitfalls of AI misuse and misunderstanding:

  1. Monitor what your child installs on their mobile device and what they do with it.
  2. Have conversations and find out who your child is texting.
  3. If you use these systems for entertainment yourself, do not share any real information or secrets.
  4. Reach out to family members and support groups, and embrace human contact.
  5. Seek advice from professionals, not from machines, AI texts, or random people.
  6. Be yourself. Enjoy who you are, and be thankful for what you offer the world.

At the end of the conversation, Warith was asked one more question.

Q: Is AI good for us, or bad for us? What is your opinion?

A: AI is good for us. But think of it like water: too much water can be toxic, a condition known as water intoxication, which can lead to serious problems, including death. You can drown in water, and water can be abused during interrogations. Think of AI the same way. Misuse or abuse can lead to disaster.
