Tools of the trade

The last month of game development has been reasonably productive even if it hasn’t been bombastic.

Scenes and Dialogue

The core of The Day After is scenes and dialogue. Your game is broken up into a chain of scenes, each of which might be a discussion between characters, a fight scene or an action sequence. Our eye is on the prize of having a Minimum Viable Product (MVP) sometime this year. This MVP is just going to be different scenes of dialogue. Maybe branching dialogue if everything goes swimmingly.

The first cut of this is having a small library of scenes we can play and stitch together. Since AI is not a part of this MVP, the AI Director won’t be doing anything more intelligent than randomly selecting scenes. In fact, the levels of achievement look like this:

  1. Randomly select scenes from a library.
  2. Randomly select scenes from a library, without repetition.
  3. Select scenes that go through a predefined start scene and end scene.
  4. Select scenes with a start and an end, satisfying simple mood prerequisites.
  5. Select scenes a la #4, but match the shape of a dramatic arc (exposition, rising action, climax, falling action, and dénouement).
  6. Choose a plot structure (eg, traditional, non-linear, or Rashomon-style) and build narrative in that structure.

The MVP will be this first level, or maybe the second if we’re doing well.
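
As a rough sketch of what those first two levels could look like (the names here are mine, not from the actual codebase):

import random

class SceneLibrary:
    """Hypothetical scene library for MVP levels 1 and 2: level 1 just
    picks at random, level 2 avoids repeats until the pool is exhausted."""

    def __init__(self, scenes):
        self.scenes = list(scenes)
        self.unplayed = list(scenes)

    def pick_any(self):
        # Level 1: pure random selection.
        return random.choice(self.scenes)

    def pick_without_repeats(self):
        # Level 2: random selection, but never the same scene twice
        # until every scene has been played once.
        if not self.unplayed:
            self.unplayed = list(self.scenes)
        scene = random.choice(self.unplayed)
        self.unplayed.remove(scene)
        return scene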

Over its history, scenes have soaked up a lot of the research and development in The Day After. How do they work? What data do they need? How do players and characters interact with them? Scenes have gone through a few R&D iterations, but to get to this MVP we had to clear those questions up straight away.

Luckily, eminent interactive fiction theorist Emily Short has written a very nice article: “Beyond Branching: Quality-Based, Salience-Based, and Waypoint Narrative Structures”. This was great food for thought (as was Sam Kabo Ashwell’s “Standard Patterns in Choice-Based Games”, though that one is a bit more structural).

The scene mechanics of The Day After closely resemble salience-based narrative, but with the AI Director calling the shots. There’s work required in providing enough data for a scene, but that’s not intellectually tricky.
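
As a rough illustration of the salience-based idea (my own sketch and field names, not the game’s actual data), each scene advertises its prerequisites and preferences, and the director just picks the best match against the current state of the world:

def salience(scene, world_state):
    # Hard requirements rule a scene out entirely; soft preferences add to its score.
    requires = scene.get("requires", {})
    if any(world_state.get(key) != value for key, value in requires.items()):
        return -1
    prefers = scene.get("prefers", {})
    return sum(world_state.get(key) == value for key, value in prefers.items())

def pick_scene(scenes, world_state):
    # A toy AI Director: ignore ineligible scenes, then take the highest-scoring one.
    eligible = [s for s in scenes if salience(s, world_state) >= 0]
    return max(eligible, key=lambda s: salience(s, world_state)) if eligible else None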

I hadn’t really thought much about dialogue before. I had expected dialogue to be fixed to the scene, but that rigidity felt uncomfortable. I had thought about a simple scripting system, which might look something like this template:

if player.character_class == 'Hacker':
    player.say("YOLO!")
else:
    player.say("Well here goes nothing...")

This is where it gets complicated: character-specific speech is very verbose to specify (it takes 4 lines just to add custom text for The Hacker!), and that system never felt comfortable. However, Emily’s waypoint narrative system seems to be a good fit: flexible enough for interesting conversations, but not particularly onerous to use.

The way the waypoint dialogue system works is that a conversation is described not as dialogue lines, but as milestones or topics. Think of each milestone or topic as a node in a network; all the dialogue options are edges coming off that node. If you take a dialogue option, it takes you to another milestone/topic. If you’re in a discussion with an AI player, it might try to lead you through the network to a place it wants to go, and the conversation options you take might help or hinder that.

Interestingly, a conversation might return to the same topic, and you might not want the same dialogue choice to produce the exact same dialogue. To handle this, two nodes might have multiple edges between them, with conditions for selecting which applies (eg, the current mood, or whether you’ve tried this dialogue choice before). This is not a new system: quite apart from Emily’s write-up, TADS 3 had similar dialogue systems years ago.
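
To make the shape concrete, here’s a rough sketch of how such a topic network could be represented (the structure and names are my guesses, not the real data model):

# Topics are nodes; dialogue options are edges. Two edges can join the
# same pair of topics, with a condition deciding which one is on offer now.
topics = {
    "greeting": [
        {"line": "We need to talk about the plan.", "goto": "plan",
         "when": lambda ctx: "plan" not in ctx["visited"]},
        {"line": "About that plan again...", "goto": "plan",
         "when": lambda ctx: "plan" in ctx["visited"]},
    ],
    "plan": [
        {"line": "Forget it, I'm out.", "goto": "farewell",
         "when": lambda ctx: ctx["mood"] == "angry"},
        {"line": "Fine, walk me through it.", "goto": "details",
         "when": lambda ctx: True},
    ],
}

def options(topic, ctx):
    # The dialogue choices actually offered from this node, in this context.
    return [edge for edge in topics[topic] if edge["when"](ctx)]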

I began experimenting with these ideas, with scenes being big JSON blobs of data and dialogue being a mini network encapsulated inside a scene. Writing JSON to a flexible schema is really tedious, so May 2016’s game dev work was dedicated to creating a tool to at least write scene data easily and visually.
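
For a feel of what one of those blobs might contain, here is a guess at the general shape, written as a Python dict that round-trips to JSON (the field names are illustrative, not the real schema):

import json

scene = {
    "id": "rooftop_argument",
    "mood_prerequisites": {"tension": "high"},
    "dialogue": {
        "start": "greeting",
        "topics": {
            "greeting": [{"line": "You came.", "goto": "argument"}],
            "argument": [{"line": "This wasn't the plan!", "goto": "end"}],
        },
    },
}

print(json.dumps(scene, indent=2))  # roughly what would be written by hand today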

Tools

“The Day After Dialogue Crafter” is a little webapp with a simple Python backend. Running in a browser means it’s easy for me to iterate without worrying about the nuts and bolts too much. Adding an image to a web page is far easier than doing the same in a GUI toolkit like Qt, or even in UE4. I’ve used Bootstrap for layout and UI, vis.js to create the networks very easily, and a few other Javascript libraries for extra UI flexibility. All the actual data work is done by AJAX requests back to a Python program (using Flask and Flask-RESTful).
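
That backend doesn’t need to be much more than CRUD over scene data. Here is a minimal sketch of the kind of endpoint involved (the route and the in-memory storage are my own stand-ins, not the actual tool):

from flask import Flask, request
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)

SCENES = {}  # stand-in for however the scenes are actually persisted

class Scene(Resource):
    def get(self, scene_id):
        # The browser fetches a scene to display as a vis.js network.
        return SCENES.get(scene_id, {}), 200

    def put(self, scene_id):
        # The browser posts the edited scene back via AJAX.
        SCENES[scene_id] = request.get_json()
        return SCENES[scene_id], 200

api.add_resource(Scene, "/scenes/<string:scene_id>")

if __name__ == "__main__":
    app.run(debug=True)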

Current Dialogue Crafter tool

This tool isn’t finished yet, but it lets me quickly create, change and maintain scenes. I can create dialogue networks… but no dialogue yet. I’m building it all up piece by piece, and I’m producing results at a good rate. It’s not a beautiful program, but it’s better than eyeballing text files. And hey, I’m learning web technologies I should have learned 5 years ago.

VR dev

Apart from this work, I recently acquired an HTC Vive virtual reality kit. I remember playing VR games at an arcade about 20 years ago. You had to stand on a little ringed podium while, in VR, you were on a cart that dragged you through arenas where you shot bad robots. It was expensive ($5 a go?) and I remember it being cool.

From that perspective, little has changed. The HTC Vive was quite expensive (roughly $1300 AUD) but it’s an amazing experience. Especially when paired with a good graphics card.

I had a 3D TV for five years and never once used the 3D aspect. 3D movies are a bit of a gimmick. But VR (especially in the Vive format) is fantastic. It’s a great moment when you see your controller move as fast and as accurately as your hand, when you can walk around a space, or when you can pull off physical tricks that you just can’t do with a gamepad.

For example, in Holopoint you’re in the equivalent of the Matrix training program for archery. Nocking, drawing and shooting arrows feels reminiscent of the Wii. But when the targets shoot back at your head, you need to duck and weave. Maybe you flick your head to the side. Maybe you arch your back and dodge it like Neo. Maybe you engage your entire body and just duck. Doing this feels natural and instinctive, and the first time you try it, you feel like you’ve tricked the game. In reality, the well-constructed VR experiences are tricking you into doing this.

Watching YouTube videos or reading reviews really doesn’t convey the physicality of it all. You need to experience it in person. It’s like someone describing a great concert you weren’t at.

The only time I felt uncomfortable or nauseous was when I was using my older graphics card on demanding games. Any stutter put the visuals out of step with my vestibular system, which was about as bad as spending a day on a boat and then trying to lose your sea legs. When I upgraded my GPU, there was no lag whatsoever and everything felt right.

Previous attempts at VR delivered too little progress for the great hype. This current generation is finally getting there. In film terms, we’ve advanced beyond the zoetrope and are getting some nifty kinetoscopes. I don’t think it’s worth investing too much in yet, unless you like to be at the bleeding edge. But definitely try one if you can. None of the current shortcomings are show-stoppers, but wireless headsets, better screens and fancier GPUs behind them would be killer.

I’d definitely recommend the Vive over the Oculus. The latter seems to be playing catch-up with the former, and Oculus doesn’t seem to be acting in the best interests of its customers or the medium. I’ve also tried Google Cardboard, which is an ultra-cheap way to get into VR. Samsung VR doesn’t seem worth it. I’ll be interested to see how the Sony VR goes, given the strong impact of the GPU on VR experiences.

I’ve been experimenting with writing a short VR game in UE4. Doing VR in UE4 is pretty much effortless from a tech perspective: just plug in and select VR from a menu. Creating 3D models is very weird, because what looks like the right scale in first-person-shooter mode looks like a world made for shorties in VR. As of today, UE4 (ver 4.12) has some amazing dev tools for VR, like creator tools inside VR itself so you can make a game without having to take the goggles on and off.

There’s a bunch of lessons to be learned (and created!) in VR, so anyone wanting to create VR games should know that there is a lot of work awaiting them. I continue to play with VR game dev, just as a palate cleanser from the fairly traditional style of The Day After.

Next up

More tool development! I was hoping to have the tools in a better state by mid-May, but didn’t quite get there for this monthly update. Hopefully I can provide another update in the coming weeks.