
ViZDoom: a Doom battlefield for smart bots

The researchers behind the ViZDoom project have issued a challenge: build a controller (in C++, Python, or Java) for the historic FPS Doom that can take on other bots using machine learning algorithms, in the Visual Doom AI Competition deathmatch tournament. ViZDoom is an artificial intelligence research platform based on Doom, designed to advance visual learning and so-called "reinforcement learning" techniques, which "educate" the machine by letting it develop its own behavioral strategy through a succession of trial and error.

Using bots to play Doom may not seem like a novelty; computer-controlled Doom bots have faced human gamers for years. The challenge, however, lies in the limits set by the researchers: the controller may only access the information in the video buffer, and is denied any other information, such as the map, the weapons, and the positions of the other players.
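A controller limited to the video buffer typically starts by preprocessing each raw frame before handing it to a learning algorithm. A minimal sketch of that step, using NumPy on a synthetic RGB frame (the frame shape and the `preprocess` helper are illustrative assumptions, not ViZDoom's actual buffer layout):

```python
import numpy as np

def preprocess(frame: np.ndarray, out_h: int = 60, out_w: int = 80) -> np.ndarray:
    """Reduce a raw (height, width, 3) uint8 screen buffer to a small
    grayscale array in [0, 1] - a common first step for pixel-based agents."""
    gray = frame.mean(axis=2)                      # average channels -> grayscale
    h, w = gray.shape
    ys = np.linspace(0, h - 1, out_h).astype(int)  # nearest-neighbor downsample
    xs = np.linspace(0, w - 1, out_w).astype(int)
    small = gray[np.ix_(ys, xs)]
    return (small / 255.0).astype(np.float32)      # normalize to [0, 1]

# Synthetic 240x320 frame standing in for the real video buffer:
frame = np.random.randint(0, 256, size=(240, 320, 3), dtype=np.uint8)
obs = preprocess(frame)
print(obs.shape)  # (60, 80)
```

Shrinking and normalizing the frame keeps the input small enough for a learning algorithm to process at game speed while preserving the visual cues (enemies, walls, projectiles) the controller must react to.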

The computer is therefore made more "human," because its only input is visual: what appears on the screen. This also changes how the computer controls its bots, and this is where the deep reinforcement learning techniques that the researchers invite participants to use come into play. Without reaching too far back in memory, you may recall that similar techniques were employed in another recent and much-discussed man-versus-machine challenge: the match between AlphaGo and the Go champion Lee Se-dol.
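The "succession of trial and error" behind these techniques is the core reinforcement learning loop: act, observe the reward, and nudge your estimates of each action's value. A toy tabular Q-learning sketch, the precursor of the deep variants used on pixel input, on an invented five-state corridor rather than Doom itself (the environment, constants, and reward are all illustrative assumptions):

```python
import random

# Toy corridor: states 0..4, reward only for reaching state 4.
# This stands in for a real environment such as ViZDoom.
N_STATES, ACTIONS = 5, (-1, +1)      # actions: move left / move right
alpha, gamma = 0.5, 0.9              # learning rate, discount factor
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS)                # explore by trial and error
        s2 = min(max(s + a, 0), N_STATES - 1)     # clamp to the corridor
        r = 1.0 if s2 == N_STATES - 1 else 0.0    # reward only at the goal
        # Q-learning update: move the estimate toward
        # (immediate reward + discounted best future value).
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# The greedy policy learned from Q marches right at every state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

Deep reinforcement learning, as used for AlphaGo and as invited here, replaces the lookup table `Q` with a neural network so the same update rule can work from raw screen pixels instead of a handful of discrete states.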

Logging on to the official ViZDoom website, you can watch some tests of the visual reinforcement learning technique. In the first, the bot fires on the enemy responding solely to visual input (read: scanning only the video buffer data)...

Developers interested in participating must submit the required material by May 31. The Visual Doom AI Competition heats up in August and September. The final deathmatch will take place at the 2016 Computational Intelligence and Games Conference, to be held in Greece in September. In the first round, the bots will face each other using a single weapon (a rocket launcher) on a known map; in the second, neither the weapon nor the map will be revealed in advance. As in deathmatches between humans, the bot that totals the highest number of kills wins.

What may seem a futile exercise in style or, at worst, a project whose effects will be confined to gaming is, by contrast, part of a much broader context, and ultimately a very profitable one for the companies that have chosen to invest in the "natural intelligence" sector (intelligent digital assistants, bots, etc.). At the recent Microsoft Forum 2016, held in Milan, for example, the company, committed to developing its own smart bot ecosystem, estimated that by 2020 this market could be worth $5 billion.
