Month: October 2024

  • Musk Claims Brain-Computer Interface to Tackle Most Diseases, Cost Comparable to Smartphones After Mass Production

    Elon Musk boldly declared at the 2024 Congress of Neurological Surgeons (CNS 2024) that Neuralink has the potential to solve the majority of diseases or brain issues. He likened the brain to a circuit board with shorts or missing connections that can be repaired.

Echoing his real-life “Iron Man” persona, Musk has made helping people with disabilities a priority. Neuralink’s main goal is to establish a brain-computer interface by implanting chips and electrodes into the human brain, enabling patients with visual or motor impairments to control external devices directly through the brain’s bioelectrical signals.

This technology allows machines to read the electrical signals produced by neural activity and infer the brain’s intentions, controlling external devices like phones, computers, and robotic arms. Conversely, machines can input information into the brain by electrically stimulating specific neuron clusters, converting images and sounds into neural signals to produce visual and auditory experiences.

Neuralink’s first product, named “Telepathy,” enables users to control their phones or computers with their minds, and subsequently, almost any other device. In January 2024, Neuralink conducted its first human trial with Noland Arbaugh, a quadriplegic since a 2016 diving accident. After implantation with the “N1” device, Arbaugh recovered well and could perform daily activities like watching videos, reading, and playing video games through the interface. However, some of the implant’s thread electrodes later retracted from his brain tissue, prompting Neuralink engineers to compensate by modifying the decoding algorithms, which improved the device’s effective bits per second.

The second participant, Alex, a former automotive technician also paralyzed by a spinal cord injury, received the “Link” implant. Neuralink refined the surgical procedure to avoid the thread retraction seen in Arbaugh’s case. Impressively, Alex learned to move a cursor with his mind within five minutes of the interface being connected to a computer. He now uses it to operate CAD software, with which he designed a 3D-printed charging stand for his implant, and to play FPS games such as “CS2.”

    Neuralink is expanding control options for digital devices, including decoding multiple clicks and simultaneous movements for full mouse and video game controller functionality. They are also developing algorithms to recognize handwriting intentions for faster text input by disabled individuals. Future plans include enabling Link to interact with the physical world, allowing users to independently eat and move using robotic arms or wheelchairs.

    Neuralink’s next-generation product, “Blindsight,” aims to restore vision to those who have lost their eyes and optic nerves, even potentially allowing congenitally blind individuals to see the world for the first time. Indian industrialist Anand Mahindra remarked that if the device meets expectations, it would be Musk’s “most enduring gift to humanity, far surpassing Tesla or SpaceX.”

Musk also announced the “600-Second Circuit” plan: like laser eye surgery, the implantation procedure would take only 10 minutes to complete. He is confident that Neuralink can eventually achieve low-cost mass production, targeting an initial price range of $5,000 to $10,000, potentially dropping to smartphone-like prices of $1,000 to $2,000 as production scales. After all, Musk is the “Iron Man” who specializes in turning the impossible into the eventual.

  • Google Postpones Launch of Next-Generation AI Agents Until at Least 2025

    Google has announced that its ambitious Project Astra, aimed at developing AI applications and “agents” for real-time, multimodal understanding, will not be available until at least 2025.

    This timeline was revealed by Google CEO Sundar Pichai during the company’s Q3 earnings call on Tuesday. Pichai stated, “We are building experiences where AI can see and reason about the world around you. Project Astra offers a glimpse of that future, and we are striving to deliver such experiences as soon as 2025.”

    Project Astra, which Google showcased at its I/O developer conference in May 2024, encompasses a diverse array of technologies. These range from smartphone apps that can recognize their surroundings and answer pertinent questions to AI assistants capable of performing tasks on a user’s behalf.

    During the I/O conference, Google demonstrated a Project Astra prototype that could answer questions about objects within a smartphone camera’s view, such as identifying a user’s neighborhood or naming a part on a broken bicycle.

    Earlier this month, The Information reported that Google was planning to launch a consumer-focused agent experience as early as December, capable of tasks like purchasing products, booking flights, and other chores. However, it now appears unlikely that this will happen unless the experience in question is unrelated to Project Astra.

Meanwhile, Anthropic has recently emerged as one of the first companies with a large generative AI model capable of controlling apps and web browsers on a PC. Its experience underscores the challenges of building AI agents, however, as the model has struggled with many basic tasks.

  • OpenAI reportedly planning to build its first AI chip in 2026

OpenAI is reportedly planning to develop its first AI chip in 2026, in collaboration with TSMC and Broadcom. The company has also begun using AMD chips alongside Nvidia’s for AI training.

According to Reuters, OpenAI has shelved its plans to build a network of chip manufacturing foundries and will instead concentrate on designing its own chips in-house. Reuters further reports that OpenAI has been working with Broadcom for several months on an AI chip designed specifically for running models (inference), with a release expected as early as 2026.

    In parallel, OpenAI intends to leverage AMD chips through Microsoft’s Azure cloud platform for model training. Previously, the company had relied heavily on Nvidia GPUs for this purpose. However, due to chip shortages, delays, and the increasing cost of training, OpenAI has been prompted to explore alternative options, as stated by Reuters.