Mystery Mansion* is a spooky, hidden-object adventure game from veteran game developer Cyrus Lum. The game took first place in the Pioneer track of the 2014 Intel® RealSense™ App Challenge, with its innovative, experimental approach drawing the attention of the judges.
As a self-described storyteller, Lum aims to “enhance that suspension of disbelief—to get [the] audience lost in the story and the world I’ve created.” Mystery Mansion is remarkable for its exclusive use of face tracking for the user interface (UI), which results in a highly immersive experience. However, the face-only approach posed a number of development challenges during Lum’s quest to implement intuitive controls and to create a satisfying user experience with a bare-bones UI. In this paper we’ll discuss the challenges encountered and the code Lum used to address them, including how to accurately calibrate the game’s UI for different players, and manage the movement and manipulation of objects in the environment with the intentionally limited control scheme.
The Mystery Mansion* splash screen, presenting its central haunted-house theme.
Optimizations and Challenges
Face Tracking
The inspiration for Mystery Mansion came from Lum's search for ways to use the capabilities of the Intel® RealSense™ SDK beyond hand gesture control. This led to his decision to build a game that would be controlled exclusively with face tracking.
After searching for a game theme and mechanics that would correspond with the necessarily simplified UI, Lum decided on a first-person, hidden-object game in the style of a point-and-click adventure. Controlling the game with the head alone, as if it were somehow detached from the body, inspired Lum’s idea of a ‘disembodied’ ghost as the game’s central playable character. In Mystery Mansion, the player takes the role of a spirit trapped in the mansion, looking to solve the mystery of its own death.
Nose and Eyes
The game requires that players visually explore the environment, locating and collecting a series of items. For lateral and vertical movement, the Intel® RealSense™ Camera F200 tracks the movement of the face—specifically the nose—allowing the player to intuitively look around the environment. The directional movement of the player's face is reproduced by the on-screen reticule.
The red reticule is controlled by the movement of the player's face, which is tracked by the Intel® RealSense™ camera.
Lum wanted players to be able to explore the environment in 360 degrees—to turn left and right, and even look behind them. To make this possible, he implemented a system whereby once the player crosses a certain lateral movement threshold, the field of view begins to shift in that direction around the space, with the movement accelerating as the player’s view moves toward the edge of the screen.
To zoom in and out of the environment, the Intel RealSense camera tracks the distance between the player's eyes. The closer together the eyes appear to the camera, the farther away the user is, and hence the more the view zooms out (with the opposite true for zooming in). The camera zoom is calibrated so that the player doesn't need to move uncomfortably close to the screen to zoom in on objects.
void OnFaceData(PXCMFaceData.LandmarksData data) {
    if (!visualizeFace) return;
    if (colorSize.width == 0 || colorSize.height == 0) return;

    PXCMFaceData.LandmarkPoint[] points;
    if (!data.QueryPoints(out points)) return;

    // The nose tip drives the directional (reticule) movement.
    PXCMPointF32 xy = points[data.QueryPointIndex(PXCMFaceData.LandmarkType.LANDMARK_NOSE_TIP)].image;

    // The distance between the eye centers indicates how close the player is to the camera.
    PXCMPointF32 xyEyeLeft = points[data.QueryPointIndex(PXCMFaceData.LandmarkType.LANDMARK_EYE_LEFT_CENTER)].image;
    PXCMPointF32 xyEyeRight = points[data.QueryPointIndex(PXCMFaceData.LandmarkType.LANDMARK_EYE_RIGHT_CENTER)].image;

    // Scale the eye distance and clamp it to the calibrated near/far range.
    float tmpEye = Mathf.Abs((xyEyeLeft.x - xyEyeRight.x) * eyeDistScale);
    if (tmpEye < eyeDistNear) tmpEye = eyeDistNear;
    if (tmpEye > eyeDistFar) tmpEye = eyeDistFar;

    // Zoom by narrowing the field of view as the player leans in.
    Camera.current.fieldOfView = eyeFOVcenter - tmpEye;
}
Code Sample 1: This code uses the distance between the user’s eyes to calculate his or her distance from the screen and then adjusts the field of view accordingly.
Optimizing the UI
Calibration
Lum observed that every player plays the game in a slightly different way and has different physical attributes: the size, proportions, and arrangement of facial features vary from person to person. Calibrating the face tracking at the start of each play session was therefore key to making the directional controls work correctly for each individual player. Lum inserted a calibration stage at the start of each session to establish the player's "zero position" and to ensure the face can be tracked within the range of the camera.
To establish the “zero position” during the calibration stage, the player moves his or her head to position the reticule within a bordered area at the center of the screen. This ensures that the player is within the range of the camera (the tracking volume) when turning his or her head, or moving in and out. The process ensures a consistent experience for every player regardless of differences in face shape, size, and position in relation to the camera.
The calibration stage at the beginning of each play session helps ensure a consistent and accurate experience for each different player.
if (!calibrated && calibratedNose && gameStart
        && (295 * screenScaleWidth) < targetX && targetX < (345 * screenScaleWidth)
        && (235 * screenScaleHeight) < targetY && targetY < (285 * screenScaleHeight)) {
    // The reticule is inside the central calibration box: record the zero position and show the tutorial.
    tutorialImg = tutorial1Img;
    LeanTween.alpha(tutorialRect, 1f, 2f).setEase(LeanTweenType.easeInCirc).setDelay(1.5f);
    LeanTween.alpha(tutorialRect, 0f, 2f).setEase(LeanTweenType.easeInCirc).setDelay(8f).setOnComplete(showTutorialTurn);
}
Code Sample 2: This code calibrates the game by ensuring the target reticule is within the red box shown on the calibration screenshot.
Lateral Movement
Full freedom of lateral camera movement was necessary to create the 360-degree field of view that Lum wanted to offer the player. Fundamental to an optimized experience with the in-game camera were two mechanisms: a central safety zone and rotational acceleration.
// Normalize the nose position to 0..1 (x is mirrored so the view follows the head).
xy.x = (1 - (xy.x / colorSize.width));
xy.y = (xy.y / colorSize.height);

// Rotation speed grows with the cube of the offset from screen center,
// so the camera accelerates as the reticule nears the edge of the screen.
newX = (0.5f - xy.x) * (rotateAccelerationScale * Mathf.Abs((0.5f - xy.x) * (0.5f - xy.x)));
newY = (0.5f - xy.y) * (rotateAccelerationScale * Mathf.Abs((0.5f - xy.y) * (0.5f - xy.y)));

// Lateral rotation is unrestricted, giving the full 360-degree view.
mainCamBase.transform.Rotate(0, (newX * (lookTimeScale * Time.deltaTime)), 0);

// Vertical rotation is applied only within the clamped range (angleY is tracked elsewhere).
if (angleY > lookUpMin && angleY < lookUpMax) {
    mainCam.transform.Rotate((newY * (lookTimeScale * Time.deltaTime)), 0, 0);
    if (angleY < lookUpMin) angleY = lookUpMin;
    if (angleY > lookUpMax) angleY = lookUpMax;
}
Code Sample 3: This code controls the rotation and lateral acceleration of the camera as the user turns his or her head further from the center of the screen.
If the player keeps the reticule within a specific zone at the center of the screen—approximately 50 percent of the horizontal volume—the camera does not begin to rotate, which ensures that the player can explore that area without the camera moving unintentionally. Once the reticule is moved outside that zone, the lateral movement begins in the direction the player is looking, and accelerates as he or she moves the reticule toward the edge of the screen. This gives the player accurate and intuitive 360-degree control.
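Code Sample 3 folds this behavior into the cubic scaling of newX, which keeps rotation negligible near the center of the screen. The dead zone can also be made explicit, as in the following sketch; the names (RotateWithSafetyZone, maxTurnSpeed) are illustrative assumptions, not Lum's actual code.

// Hypothetical sketch of an explicit safety zone; names are illustrative, not from Mystery Mansion.
// xNorm: the nose position normalized to 0..1 across the camera image width.
void RotateWithSafetyZone(float xNorm) {
    const float deadZoneHalfWidth = 0.25f;              // central 50% of the screen: no rotation
    float offset = 0.5f - xNorm;                        // signed distance from screen center
    if (Mathf.Abs(offset) <= deadZoneHalfWidth) return; // reticule is inside the safety zone

    // Zero speed at the zone's edge, accelerating quadratically toward the screen edge.
    float t = (Mathf.Abs(offset) - deadZoneHalfWidth) / (0.5f - deadZoneHalfWidth);
    mainCamBase.transform.Rotate(0f, Mathf.Sign(offset) * maxTurnSpeed * t * t * Time.deltaTime, 0f);
}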
Vertical Movement
Lum's experiments showed that complete freedom of camera movement was less practical on the vertical axis: with both axes in play, the player can easily become disoriented. Moreover, with the interactive elements of the game occupying the lateral band of vision, little is gained by letting players stare at the ceiling or floor. Players still needed some vertical movement to inspect items near the floor or on raised surfaces, so Lum allowed 30 degrees of movement both up and down, a compromise that lets the player look around without losing orientation.
Gameplay Optimizations
After exploring a variety of gameplay mechanics, Lum decided to add simple puzzle solving, in addition to the core, object-collection gameplay. Puzzle solving fit with the overall game design and could be effectively and intuitively implemented using only the face-tracking UI.
The player picks up items in the game by moving the reticule over them, like a mouse cursor hovering over an object. At the rollover point, the object moves out of the environment toward the player—as if being levitated supernaturally—and enters the on-screen item inventory.
Here, the player has selected an object in the environment by moving his or her face to position the reticule over it, causing the object—in this case a pillbox—to move toward the player and into the inventory.
// Cast a ray from the camera through the reticule's screen position.
Ray ray = Camera.current.ScreenPointToRay(new Vector3((float)targetX, (float)Screen.height - (float)targetY, 0.1f));
RaycastHit hit;
if (Physics.Raycast(ray, out hit, 100)) {
    if (hit.collider.name.Contains("MysItem")) {
        hit.collider.transform.Rotate(new Vector3(0, -3, 0)); // spin the item under the reticule
    }
}
Code Sample 4: This is the code that lets the player pick up objects using the face-tracking UI.
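Code Sample 4 covers the rollover detection; the levitation into the inventory isn't included in the sample. A minimal sketch of that step, assuming a LeanTween move toward the camera and hypothetical CollectItem and inventory names, might look like this:

// Hypothetical pick-up animation (illustrative names; not Lum's actual code).
// Called once the raycast in Code Sample 4 confirms the reticule is over a "MysItem".
void CollectItem(GameObject item) {
    // Float the object toward the camera as if levitated by the ghost.
    Vector3 nearCamera = mainCam.transform.position + mainCam.transform.forward * 0.5f;
    LeanTween.move(item, nearCamera, 1.5f).setEase(LeanTweenType.easeOutCubic).setOnComplete(() => {
        item.SetActive(false);    // remove the object from the scene...
        inventory.Add(item.name); // ...and record it in the on-screen inventory
    });
}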
Lum also needed the player to move from room to room in a way that would fit with the simplified UI and the logic of the gameplay. To this end, each room has an exit door that is only activated once the player has collected all the available objects. At this point, the player moves his or her face to position the reticule over the door, causing the door to open and moving the game scene to the next space.
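The article's samples don't include the door logic, but given the description above, it could extend the same raycast branching seen in Code Samples 4 and 5 (below). In this sketch, the "exitDoor" name check, the inventory and totalItems fields, and the Unity 4-era Application.LoadLevel call are all assumptions:

// Hypothetical exit-door branch (illustrative; not Lum's actual code).
else if (hit.collider.name.Contains("exitDoor")) {
    room currentRoom = (room)rooms[roomID];
    // The door only responds once every item in the room has been collected.
    if (inventory.Count >= currentRoom.totalItems && !LeanTween.isTweening(hit.collider.gameObject)) {
        // Swing the door open, then load the next room's scene.
        LeanTween.rotateAroundLocal(hit.collider.gameObject, Vector3.up, -110f, 2f)
            .setEase(LeanTweenType.easeInOutCubic).setOnComplete(() => Application.LoadLevel(roomID + 1));
    }
}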
To add variety to the gameplay, Lum explored and added different puzzle mechanics that could be manipulated in similarly simple and logical ways. One of these is the block puzzle, where the player uses the directional control to move blocks within a frame. Lum implemented this action in multiple ways, including moving blocks within a picture frame to reconstruct an image and moving pipes on a door in order to unlock it.
else if (hit.collider.name.Contains("puzzleGame")) {
    room currentRoom = (room)rooms[roomID];
    puzzleItem tmpPuzzleItem = (puzzleItem)currentRoom.puzzleItems[hit.collider.name];
    // Ignore pieces that are mid-animation or part of an already-solved puzzle.
    if (!LeanTween.isTweening(hit.collider.gameObject) && !tmpPuzzleItem.solved) {
        if (hit.collider.name.Contains("_Rot")) {
            // Rotate the piece 90 degrees around its local up axis over two seconds.
            LeanTween.rotateAroundLocal(hit.collider.gameObject, Vector3.up, 90f, 2f).setEase(LeanTweenType.easeInOutCubic).setOnCompleteParam(hit.collider.gameObject).setOnComplete(puzzleAnimFinished);
        }
    }
}
Code Sample 5: This code allows players to move blocks using face-tracking in order to complete some of the game’s puzzles.
The puzzles include this block picture: the player uses the directional control to move the pieces and reconstruct the image, revealing a clue.
In this puzzle, the player uses the face-tracked directional control to move the segments of piping.
Text prompts also appear on the screen to help the player determine the next step, for example, when he or she needs to exit the room.
Testing and Analysis
Mystery Mansion was developed in its entirety by Lum, who also conducted the majority of the testing. During development, however, he called on the services of three friends to test the game.
In the world of video games, no two players play a game exactly the same way—what seems intuitive to one player will not necessarily be so for another. This difference quickly became evident to Lum during the external testing, particularly with the face tracking. Testers had difficulty advancing through the game because they would position themselves differently, or simply because of differences in facial size and composition. Lum’s observations led to the implementation of the calibration stage prior to play and the addition of a tutorial at the beginning of the game to ensure that the basics of the UI are well understood.
The tutorial stage helps players understand how to interact with the game using only the face-tracking interface.
Key Lessons
For Lum, when working with Intel RealSense technology and other human-interface technologies, simplicity is absolutely key, a point driven home by his work on Mystery Mansion, where the UI is limited exclusively to face tracking. He is a firm believer in not trying to do too much in terms of mechanics and features, even when an idea initially seems cool. Moving through the environment and manipulating objects using only face tracking required careful iteration of the stripped-down UI, plus a degree of tutorial "hand-holding", to ensure the player is never left not knowing what to do or how to advance through the game.
Testing played a key role in the development of Mystery Mansion. Lum found that developers should not assume that what works for one player will automatically be true for another. Every player will behave differently with the game, and in terms of the human interface of the Intel RealSense camera, each player’s face and hands will have different size, shape, movement and positional attributes which must be compensated for in the code.
The resources provided with the Intel RealSense SDK's Unity* Toolkit gave Lum a straightforward development path. Unity is a user-friendly engine with well-tested, thorough compatibility with the Intel RealSense SDK, a wealth of resources (including those provided with the SDK), a strong support community, and a ready stock of graphical assets from the Asset Store.
Lum believes that developers should always consider the physical impact of prolonged play times with hand gesture controls, which can sometimes lead to limb fatigue if the UI is not thoughtfully balanced for the player.
Tools and Resources
Lum found the process of developing his game using the Intel RealSense SDK straightforward. He also dedicated time to reviewing the available demos to pick up practical tips, including the Nine Cubes sample provided with the Intel RealSense SDK.
Unity
Lum chose to develop the game using Unity, which is readily compatible with the Intel RealSense SDK and offers a complete development environment. While Lum is an accomplished C# programmer, the Unity platform made much of the basic programming unnecessary, allowing him to quickly develop and test prototypes.
MonoDevelop*
To develop the C# game scripts, Lum used MonoDevelop, the integrated development environment supplied with Unity. Within MonoDevelop, Lum placed objects, set up properties, added behaviors and logic, and wrote scripts for the integration of the Intel RealSense camera data.
Nine Cubes
One of the fundamental building blocks of Mystery Mansion was the Nine Cubes sample, a Unity software sample provided with the Intel RealSense SDK (it can be found in the frameworks folder of the samples directory). This demo allows the user to move a cube using face tracking—specifically nose tracking. This functionality became the foundation of Mystery Mansion's UI.
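In essence, the sample maps the nose landmark to a transform's position each frame. A minimal sketch of that idea, using the same SDK calls as Code Sample 1, looks like this (the cube and moveScale names are assumptions, and the shipped sample's code differs):

// Minimal sketch of the Nine Cubes idea: drive a cube from the nose landmark.
// (Illustrative only; not the sample's verbatim code.)
void OnFaceData(PXCMFaceData.LandmarksData data) {
    PXCMFaceData.LandmarkPoint[] points;
    if (!data.QueryPoints(out points)) return;
    PXCMPointF32 nose = points[data.QueryPointIndex(PXCMFaceData.LandmarkType.LANDMARK_NOSE_TIP)].image;

    // Center and mirror the normalized nose position, then scale it into world units.
    float x = (0.5f - nose.x / colorSize.width) * moveScale;
    float y = (0.5f - nose.y / colorSize.height) * moveScale;
    cube.transform.localPosition = new Vector3(x, y, 0f);
}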
Unity Technologies Asset Store
Having already used the Unity Technologies Asset Store for a previous project, Lum made it his go-to source for the graphic elements of Mystery Mansion, saving time and making it possible to single-handedly develop a visually rich game. Serendipitously, he was looking for assets during the Halloween period, so creepy visuals were easy to find.
What’s Next for Mystery Mansion
Since submitting Mystery Mansion for the Intel RealSense App Challenge, Lum has continued to experiment with features that help create an even more immersive experience. For example, recent iterations allow the player to look inside boxes or containers by slowly leaning in to peer inside. This action eventually triggers something to pop out, creating a real moment of visceral fright. Lum’s takeaway is that the more pantomiming of real physical actions in the game, the greater the immersion in the experience and the more opportunities to emotionally involve and engage the player.
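One plausible way to detect the lean-in is to reuse the clamped eye-distance value from Code Sample 1, which grows as the player approaches the camera. The sketch below is an assumption of how such a trigger could work; CheckLeanIn, leanInFraction, scareTriggered, and jumpScare are invented names:

// Hypothetical lean-in trigger (illustrative; not Lum's actual code).
// tmpEye is the clamped eye-distance value from Code Sample 1: larger means the player is closer.
void CheckLeanIn(float tmpEye, bool reticuleOverContainer) {
    const float leanInFraction = 0.9f; // assumed: trigger near the eyeDistFar end of the range
    if (reticuleOverContainer && tmpEye > eyeDistFar * leanInFraction && !scareTriggered) {
        scareTriggered = true;
        jumpScare.SetActive(true); // something pops out of the container
    }
}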
To date, Mystery Mansion has been designed principally for laptops and desktop PCs equipped with user-facing Intel RealSense cameras. Lum has already conducted tests with the Google Tango* tablet and is eager to work on tablet and mobile platforms with Intel RealSense technology, particularly in light of the ongoing collaboration between Intel and Google to bring Project Tango to mobile phone devices with Intel RealSense technology.