Hellblade: Senua’s Sacrifice Trailer Continues To Avoid Showing Gameplay
Hellblade
(Last Updated On: July 29, 2017)

Ninja Theory released a new official trailer for Hellblade: Senua’s Sacrifice for the PlayStation 4 and PC. The trailer features two minutes of footage from the upcoming title, as fans continue to beg to see what the actual gameplay is like.

It’s an impressive-looking game at times. Some of the cinematics look pretty good, and the way Ninja Theory has used the Unreal Engine for the shaders and lighting is unmistakably striking, but the trailer still leaves gamers completely in the dark as to what the actual gameplay will be. You can check it out below.

One might be convinced that Ninja Theory was trying to sell gamers a partially interactive movie for $29.99.

Just as they begin to show some of the combat, it all turns into slow motion, and we still get the whole vignette effect that keeps everything in cinematic mode.

Fanboys and defenders of Ninja Theory continue to rally around the game, saying that those complaining about the lack of gameplay can sift through the many developer diaries to spot the handful of seconds of combat or walking around. But if the average gamer is still perplexed about what the actual gameplay in Hellblade is like, then there’s a communication failure on the part of the developers or the marketing team when it comes to conveying what the game actually is.

Constantly telling people it’s an emotional “experience” isn’t going to be enough to convince a lot of gamers to part ways with $30, especially when other higher-profile games like Uncharted and Destiny 2 are right around the corner.

Ninja Theory has reiterated often enough that the game is only half the price of a standard AAA title, with half the amount of content. So expect anywhere between four and six hours’ worth of gameplay… assuming they’re using the eight-hour standard as a reference point.

You can look for Hellblade: Senua’s Sacrifice to launch on Steam and PS4 starting August 8th.





About

Billy has been rustling Jimmies for years covering video games, technology and digital trends within the electronics entertainment space. The GJP cried and their tears became his milkshake. Need to get in touch? Try the Contact Page.

  • Phasmatis75

    “It’s an emotional experience.”

    No, it’s insanity, mental illness. Thus none of it matters as her mind continues to degenerate. There will probably be a surprise ending where none of it happened and it all took place in her head, with her family taking care of her as she declines further and further.

    Going full mental illness is arguably what killed any hype I had for this game. What few seconds have been shown looked meh as hell.

  • tajlund

    RPS is going nuts over this. Without a bit of substantial gameplay, they can’t get over how amazing it is. Muh Diversity! Deep, insightful look at mental illness!

    Yeah, because that’s what I play games for. Sorry, I have a psych background and issues of my own; I get enough mental illness in my life as it is.

    • Phasmatis75

      Calling it now, the game is going to bankrupt the developer after it bombs sales wise.

      • I don’t know… they’re using this as a launchpad for their middleware for the Unreal Engine 4. They apparently already managed a few deals with some movie projects, so I’m guessing even if the game fails (and it likely will) they’ll use it as a demonstration for their tech.

        In all honesty, this is actually a HUGE evolutionary step in performance capture, and it could cut costs drastically for both games and movies when it comes to the production pipeline. I don’t know what their licensing fees are like for this setup yet, but I would like to see this injector used more often; it could make for some very interesting gaming possibilities, as well as much better 1:1 performance capture for movies and cinematics.

        • Phasmatis75

          I’ve seen a few tech companies like that fold over the years. Though that bit about the developer is news to me. Where did you hear that? I’m not doubting you, I just want to expand my sources.

          I doubt that this will get off the ground. The same was said about the tech behind L.A. Noire when it came out. Years later we found out that the reason the tech, despite being brilliant, isn’t used is that it takes up too much disk space, and that it’s as costly as it is cumbersome to implement. Now I haven’t been following their tech, but I imagine it will run into some of the same issues.

          • I’ve seen a few tech companies like that fold over the years. Though that bit about the developer is news to me. Where did you hear that? I’m not doubting you, I just want to expand my sources.

            Tameem Antoniades talked briefly about it with Game Informer. Didn’t say which studios, though.

            https://www.youtube.com/watch?v=qddLvIzxhPs

            Now I haven’t been following their tech, but I imagine it will run into some of the same issues.

            A lot of people did think that about MotionScan: that it would be the future. And you’re right that it was bulky, expensive, and completely inconvenient (mainly because you had to do the facial capture separately from the motion capture, and that meant doubling up on both production schedules and costs).

            To be fair, Ninja Theory’s injector is the complete opposite: a cost-effective measure that removes weeks of clean-up and technical adjustments.

            Essentially, they’ve worked with 3Lateral and Cubic Motion to create an injector for Unreal Engine 4 that captures high-fidelity performances for both bodies and faces and renders them in real time. This allowed them to blend cinematics and gameplay in Hellblade in one seamless real-time loop, which is quite impressive.

            Even more than that, the actual capture itself takes place in real time within Unreal Engine 4. So there’s literally the option to completely cut out the intermediate phase of re-rigging, cleaning up, modifying or dumping the data after a mo-cap session and then combing through all of it later on in a separate studio with the animation artists. You can get a look at it in action in the video below.

            https://www.youtube.com/watch?v=JbQSpfWUs4I
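To sketch the difference in rough Python pseudocode (all names here are made up for illustration; none of this is Ninja Theory’s or Epic’s actual tooling), the old pipeline batches everything up for later clean-up, while the injector-style flow pushes each captured frame straight into the running engine:

```python
# Hypothetical sketch contrasting an offline mo-cap pipeline with the
# real-time "capture straight into the engine" loop described above.
# None of these names correspond to Ninja Theory's or Epic's actual APIs.

from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class CaptureFrame:
    body_joints: List[float]   # skeletal joint rotations for this frame
    face_weights: List[float]  # facial blend-shape weights for this frame

def apply_to_character_rig(frame: CaptureFrame) -> None:
    """Stand-in for the engine posing the in-game character with this frame."""
    pass

def offline_pipeline(session: List[CaptureFrame]) -> List[CaptureFrame]:
    """Traditional flow: dump the whole session, then clean it up and re-rig it
    over days or weeks before it is ever imported into the engine."""
    cleaned = [f for f in session]      # placeholder for the manual clean-up pass
    return cleaned                      # only now does the data reach the engine

def realtime_pipeline(stream: Iterable[CaptureFrame]) -> None:
    """Injector-style flow: every frame is pushed into the running engine as
    it is captured, so the final-quality render is visible on set."""
    for frame in stream:
        apply_to_character_rig(frame)   # immediate in-engine preview
```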

            In a way, this would drastically speed up cinematic and production capture for games like The Last of Us, or Uncharted, or movies heavily reliant on human performance capture for CGI sequences.

            You could technically film all the scenes and then play them back in real time to see how they would look in a final render, without any additional software tools, right there from the runtime state of Unreal Engine 4.

            The injector is literally how they were able to make Hellblade with just under two dozen people, since it cut out a ton of requirements on the pre-production and cinematic production end. They did all the filming in-studio instead of outsourcing.

            Hellblade doesn’t look like a fun game, but their middleware could provide a HUGE opportunity for indie devs to start making some really high-quality games on a really low budget.

          • Phasmatis75

            As a writer, let me first say that I love the passion and quality of that post. I occasionally geek out at well-formed presentations of ideas like the one you just delivered. I’m not being sarcastic, either.

            Now, that aside, while I agree that the technology does look like it will solve a bunch of issues with development, it also makes the faces look bad. There is a distinct uncanny valley effect going on in the video that is off-putting, one I didn’t notice at first and probably wouldn’t have until I sat down with the game.

            Knowing to look at the faces to see the result, they look like a poor substitute for the prior process. The eyes look sunken, the lips look slack and unnatural, and the facial expressions look like the work of obviously amateurish talent.

            Therein lies a huge problem with the technology. The uncanny valley can be fixed with editing at lower cost because of the time saved with the technology, but the money saved can quickly be eaten up in having to pay talent capable of emoting properly. If you have to mocap 30 different roles and you need to hire 30 different high-grade talents, that’s going to get expensive after a while.

            From an investor’s standpoint, it looks like a technology I’d let others risk their money on before adopting it myself. If the process requires more cleaning up and editing to make it look just right, then it wouldn’t be worth the investment over the former method.

            My main concern is how easy it is to overcome the uncanny valley and the few issues with her face. Let’s be honest, she’s supposed to be mentally ill, so I wouldn’t think anything of her face looking as such, but is that the standard for all characters?

            Either way, I see it being a good supplementary technology, but it’s not like this is the first time we’ve seen something like this.

          • There is a distinct uncanny valley effect going on in the video that is off-putting, one I didn’t notice at first and probably wouldn’t have until I sat down with the game.

            You’re not the first to make this criticism about it. I don’t know if it’s the way they scanned her face in or the way 3Lateral rigged it, but there have been constant and frequent comments about the uncanny valley.

            However, that’s more of a presentation issue than a technical issue. A lot of it also boils down to how a character is lit and what sort of shaders they use to coincide with the character renderer. Subsurface scattering and multi-directional light sourcing can work HUGE wonders against the uncanny valley effect, which I often notice is more of a lighting issue than a mesh/texture problem.
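As a loose illustration of the lighting point (this is a generic trick, not Hellblade’s actual shader), one common cheap way to soften skin shading is “wrap” lighting, which lets the diffuse term bleed past the terminator so faces don’t fall off into harsh black at the edges:

```python
# Minimal sketch of one common form of "wrap" diffuse lighting, a cheap
# approximation often used to soften skin shading and reduce the harsh
# terminator that makes faces read as waxy. Generic technique, not any
# specific game's shader.

def wrapped_diffuse(n_dot_l: float, wrap: float = 0.5) -> float:
    """n_dot_l: dot product of surface normal and light direction (-1..1).
    wrap: 0 gives standard Lambert shading; higher values let light wrap
    further around the surface."""
    return max(0.0, (n_dot_l + wrap) / (1.0 + wrap))

# Example: a point just past the terminator (n_dot_l = -0.1) still receives
# some light instead of going to black, which reads as softer, fleshier skin.
print(wrapped_diffuse(-0.1))        # ~0.27 with the default wrap of 0.5
print(wrapped_diffuse(-0.1, 0.0))   # 0.0 with plain Lambert shading
```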

            The uncanny valley effect is still in full swing in this older 3Lateral video.

            https://www.youtube.com/watch?v=aak1Yn5_5hA

            If you have to mocap 30 different roles and you need to hire 30 different high-grade talents, that’s going to get expensive after a while.

            I think herein lies the beauty of this middleware: think of hiring half as many, or a quarter as many, actors but rigging them to act out different characters using this process.

            They already do this with some films/games, having a single actor play multiple roles. But the cool part is that you don’t have to wait to get the feedback, since you can see how well the performance plays out in real time.

            The first thing that came to mind is that this would be perfect for Mass Effect-style games because it solves the animation issue and they could also capture the voice-acting within the same session.

            If they could also store the data as calculations instead of raw animation files, you could procedurally run the animation sets within the engine without the raw files taking up extra space, thus solving the massive storage issue that was created with the MotionScan tech.
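To make the storage argument concrete, here’s a toy back-of-the-envelope sketch (the numbers and formats are invented, not MotionScan’s or UE4’s actual ones) of why storing sparse curve keys and evaluating them at runtime is so much smaller than storing raw per-frame samples:

```python
# Toy comparison of storing facial animation as raw per-frame samples versus
# as sparse curve keys evaluated at runtime. Numbers are made up for
# illustration; real formats differ.

FPS = 30
BLEND_SHAPES = 60            # number of facial controls being animated
BYTES_PER_VALUE = 4          # one 32-bit float per control per sample

def raw_size(seconds: float) -> int:
    """Every control sampled every frame."""
    return int(seconds * FPS) * BLEND_SHAPES * BYTES_PER_VALUE

def curve_size(seconds: float, keys_per_second: float = 4) -> int:
    """Only sparse keys stored (a time and a value per key); the engine
    interpolates between them at runtime."""
    keys = int(seconds * keys_per_second) * BLEND_SHAPES
    return keys * 2 * BYTES_PER_VALUE

hours_of_dialogue = 10 * 3600
print(raw_size(hours_of_dialogue) // 2**20, "MiB raw")      # ~247 MiB
print(curve_size(hours_of_dialogue) // 2**20, "MiB curves") # ~65 MiB
```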

            It’s obviously not perfect, but if they don’t screw it up, and if they can make it cost effective, I would love to see what mid-budget (game and movie) studios could do with tech like this.

          • Phasmatis75

            There is one huge issue with hiring only a few people for the technology. The level of detail the technology employs makes it difficult to reuse the same actor for multiple roles. Otherwise the faces will either look wrong because the proportions don’t match up, or it’ll require a lot more time to get it to work right while still saving money, or you’ll end up with what happened with Bethesda and Oblivion, where every face looks exactly the same.

            I must profess I do not have the expertise you seem to have with the technology, though I can read along just fine.

            When it comes to storing it as calculations, that begs the question of why bother in the first place? It would be more cost-effective for the company making the middleware auto-animation software to mocap as many different faces talking as possible at higher levels of fidelity, store them as calculations, and have the software render the result based on those stores and the proportions of the characters in question. I may be wrong but I swear I’ve heard someone is working on just that.

            Frankly, I see the technology failing. Not because it doesn’t have its viable uses, because I believe it does. It will fail because people will use it as a cheap alternative without putting in the required amount of effort to make it look right.

            It’s not the first time I’ve seen great technology like this die. Darkest of Days demonstrated a rather impressive AI system that, as far as I am aware, did not receive widespread adoption, despite being one of the better AIs I’ve encountered in video games.

            I do wish I were as versed in the technology as you are so that we could hold a more interesting conversation. I do hope my layman’s understanding is okay.

          • The level of detail the technology employs makes it difficult to reuse the same actor for multiple roles. Otherwise the faces will either look wrong because the proportions don’t match up,

            This is a potential problem with the tech. And you’re right that it would require artists and rigging technicians to take some extra steps to get different face meshes to meld with an actor who has a completely different facial structure. This kind of tech would have been perfect for something like Cloud Atlas, where they had the different actors playing different races/genders, because it still could have been the actors, but instead of the odd-looking prosthetics, they could have attached their performances to lifelike renders.

            When it comes to storing it as calculations, that begs the question of why bother in the first place?

            Well, for movies it saves on space and time. For games it allows you to reach near-lifelike performance capture like L.A. Noire, but without taking up as much physical space, which is once again a very cost-effective way of making cinematic-style games without requiring multiple discs, etc.

            It would be more cost-effective for the company making the middleware auto-animation software to mocap as many different faces talking as possible at higher levels of fidelity, store them as calculations, and have the software render the result based on those stores and the proportions of the characters in question.

            Yes, this is what they’re doing for Star Citizen. It looks okay, but you can still tell where they’re calling on the animation set as opposed to it looking natural. The uncanny valley effect is definitely in play with Star Citizen. It’s a useful method and avoids requiring constant performance/mo-cap sessions, but you’re also going to lose a lot of fidelity in the process since algorithms still can’t quite call animation sets to run at the fidelity of a real-time performance capture.

            A lot of it boils down to nuance. It’s why Naughty Dog doesn’t use traditional facial capture but instead uses reference material from the performance capture and then has the artists go in and manually modify the face animations to mirror the nuances of the actors. It’s still not perfect, but it’s closer than what you get with the algorithms, like with Mass Effect or The Witcher.

            I may be wrong but I swear I’ve heard someone is working on just that.

            This is actually the standard tactic most companies use for games that have a lot of animations and tons of dialogue sequences. They sometimes capture a range of facial movement and behaviors and then attach them to emotive call-sets. So you’ve got angry, pensive, reluctant, sad, etc., etc., and then they attempt to attach the animation sets to vocal intonation and phonetic variance. Some work better than others, but all the current-gen tech that uses that method produces fairly wooden-looking results.
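A crude sketch of what that kind of emotive call-set lookup amounts to (the names are purely illustrative, not any shipping system’s API): a writer-supplied tag plus a rough vocal cue picks a canned animation set, with a neutral fallback, which is part of why the results read as wooden:

```python
# Crude sketch of the "emotive call-set" approach: pre-captured animation
# sets are looked up from an emotion tag plus rough vocal cues, rather than
# being driven by a live performance. All names are illustrative.

ANIMATION_SETS = {
    ("angry", "loud"):    "face_angry_shout",
    ("angry", "quiet"):   "face_angry_seethe",
    ("sad", "quiet"):     "face_sad_murmur",
    ("pensive", "quiet"): "face_pensive_idle",
}

def pick_face_animation(emotion_tag: str, intensity: float) -> str:
    """Map a writer-supplied emotion tag and a crude loudness measure from
    the voice track to one of the canned facial animation sets."""
    loudness = "loud" if intensity > 0.6 else "quiet"
    # Fall back to a neutral idle when no canned set matches, which is part
    # of why this approach tends to look wooden compared to direct capture.
    return ANIMATION_SETS.get((emotion_tag, loudness), "face_neutral_idle")

print(pick_face_animation("angry", 0.9))      # face_angry_shout
print(pick_face_animation("reluctant", 0.3))  # face_neutral_idle (no match)
```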

            In a way, to their credit, Ninja Theory’s method makes it so that what you see is what you get from the performance to the final render. So technically it even removes the need for animation artists and engineers to go back in and clean it up, if they think it looks good in the real-time runtime environment. You can either save the entire animation set as a baked data asset (an animation prefab) or store the calculations and have Unreal Engine 4 run it in real time during the actual game.

            The benefit of the animation prefab is that you don’t have to worry about glitches or hiccups messing things up in real time, whereas the stored calculations require less physical space, but if there’s an error in the cycle loop or a hiccup in the loading of an instruction set, it could end up causing some funky-looking glitches. Either way, it’s a really cool way to cut down on all of the intermediary hassles that come with the performance capture process.
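For illustration, the trade-off between the two options could be sketched like this (hypothetical structures, not UE4’s actual asset types): the baked clip is a plain lookup, while the stored-calculation version evaluates sparse keys every frame and needs a fallback when data is missing:

```python
# Sketch of the baked-prefab versus stored-calculations trade-off described
# above. A baked clip is simply played back; the procedural version
# re-derives the pose from sparse keys each frame and needs a fallback when
# data is missing. Hypothetical structures, not UE4's actual asset types.

from bisect import bisect_right
from typing import Dict, List, Tuple

Pose = Dict[str, float]                      # control name -> value

def play_baked(clip: List[Pose], frame: int) -> Pose:
    """Baked prefab: larger on disk, but playback is a plain array lookup."""
    return clip[frame]

def play_procedural(keys: List[Tuple[float, Pose]], time: float) -> Pose:
    """Stored calculations: smaller on disk, evaluated every frame.
    `keys` is assumed to be sorted by time."""
    if not keys:
        return {}                            # safe fallback instead of a glitch
    times = [t for t, _ in keys]
    i = max(0, bisect_right(times, time) - 1)
    return keys[i][1]                        # hold the most recent key's pose
```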

            But you’re right that it’s a complete toss-up as to whether or not it’s something that could end up being widely adopted and keep Ninja Theory afloat. I would prefer that it did, only because lowering the costs of higher-fidelity rendering, animation, performance capture, and HD assets would benefit the industry in the long run.

          • Phasmatis75

            Please excuse my lack of quotes. It’s such a pain in the arse to format them, and while I recognize it might look slightly unprofessional, I’d prefer to use a system of divides separated by “-”s, with occasional quotes in quotation marks when jumps occur in the discussion. I hope this is okay.

            I still say it would have run into the issue in Cloud Atlas (which I haven’t seen) of multiple characters looking alike. While I liked Guardians of the Galaxy, the way all the alien races looked like reskinned humans was extremely off-putting.

            The technology seems like it would be a great way to save money, but it also would have the effect of looking cheap compared to the more time-consuming alternative. Factoring in returns and costs would be important in deciding whether to use this technology.

            For some of the cheaper animated movies where everyone is going to look the same regardless (like Lego) this is something that could be very useful.

            As much as the industry loves to boast about lifelike faces, I’ve never seen them rank particularly high among gamers’ demands until they’re ME:A levels of bad or uncanny valley levels of off-putting.

            Perhaps it is a side effect of growing up with Atari, Nintendo, and Sega, but that kind of thing just isn’t important to me, and worse, when they attempt it the facial expressions always look as bad as they do in the movies. Some voice actors may be able to emote fantastically, but facially act, not so much.

            Perhaps in this regard they are barking up the wrong tree.

            Uncharted is probably not the best example to use; the series is mostly praised for its environmental graphics rather than its facial animations. Elena in particular spends her runtime looking like a stuck-up SJW bitch, and considering she is played by one, that does answer the question of why.

            Frankly, while Uncharted 2 looked amazing at times, The Last of Us did not. I’m baffled at how people can call that a cinematic experience, but then again their fans have been of the more dude-bro variety, so perhaps it is in the eyes of the demographics that purchase their games. With great glee I see Naughty Dog is finally going so full SJW that their dedicated audience is fleeing en masse. Though off topic, this appears to be a consistent issue with Sony across the board. Facial realism is simply not a selling point for most consumers.

            As for algorithms, I wasn’t talking about randomly taking slices of animation sets and piecing them together. I was referring more to taking a set of data points, determining an average, and letting a program render the face organically based on a lot of set points instead of slapping on set bits of animation already in the system. Adaptive face-rendering software, if you will.

            I’m guessing you’ve had some industry experience from the way you talk. Care to share?

            Though I disagree that this will remove the need for engineers and artists. It will greatly simplify their jobs, but no software is perfect, and as I said before, the uncanny valley of their capture is unsettling. Given that voice actors are known for their vocal capacity and not their acting skills, the technology eliminates many voice talents that are otherwise incapable of acting.

            With the preferred method of actors reading a script in seclusion, this will show in the facial expressions captured by the software. What will be most commented on is the lack of emotional reaction to certain dialogue, or inappropriate reactions.

            This will either have to be solved by redoing how voice acting is captured, or by having engineers and artists improve the visuals by altering the faces to flesh out the character models better during scenes. In either instance I can see more money being required, to the point where it might come down to the question of “does this technology save us enough money?” That right there might be the downfall of the technology, and that’s assuming it works as advertised, which might not even be the case. It for sure isn’t the case 100% of the time.

            Moving toward more easily captured realism might help some projects with their vision, but it also brings to the forefront the current issues with voice acting itself and how the industry goes about recording dialogue, compared to animated movies where the actors are allowed to play off each other.

            If Hellblade doesn’t pay off and properly showcase the technology as exceptional, I can see it dying here in this form. Sadly, worse is if the game is considered horrible even with working technology; being associated with a flop could be its death sentence as well. If the other three licensed projects fail, then it will die. As you are well aware, it’s never management’s fault, it’s that damn software, not the policies of the business.

            I’m indifferent to their survival. They’re an edgelord studio as far as I’m concerned, so if they do die it will probably fill me with a tiny bit of joy. DmC was a horrible insult to Devil May Cry, so /v/ will definitely celebrate their demise. They have this one last chance to redeem themselves in the eyes of the public, but they come out with “Mental illness journey, look how realistic we got mental illness. We even consulted a professional in a field where 80% of what they claim is proven false.” (I wanted to enter the field; job availability stopped me.) That’s not wowing, and now we see very little gameplay, the faces look unprofessional since they’ve hit the uncanny valley, and the studio is banking more on their technology performing than on their game being widely enjoyed.

            A company like that doesn’t understand that getting their game acclaimed by the fans is the surefire way to get their technology to sell, akin to how Microsoft grew from the gaming sector. This is not a company I can say I’d be terribly sad to see die. Especially since I’m sure a more capable team will pick up the technology if it has any merits.

          • The technology seems like it would be a great way to save money, but it also would have the effect of looking cheap compared to the more time-consuming alternative.

            Not really, since you don’t lose anything by using it. The old method requires, at a minimum, about two weeks of clean-up and optimization. So no matter what, you’re automatically knocking at least two weeks of extra production off the clean-up and porting (since it would all be done in UE4 instead of transferring the data from a mo-cap suite into UE4), and you have the actors on set for fewer days.

            Clean-up would be required in some capacity regardless; you would just be able to see what the final render would look like during the actual performance capture, so there’s none of the back-and-forth delay that usually occurs with motion/performance capture and then going through the week(s)-long process for the final render.

            Some voice actors may be able to emote fantastically, but facially act, not so much. Perhaps in this regard they are barking up the wrong tree.

            It would definitely give theater actors a lot of room to play in. Voice actors who can’t physically act would, more often than not, end up not being hired, but given the industry’s penchant for hiring television/movie actors anyway, I don’t really see that as being too much of a drawback. It might even help with the overall performances, since most of the time you can’t even tell a Hollywood actor is in the game; they just phone in their voice work and get poorly done animations for the character to match.

            Surely, seeing someone like Natalie Dormer in Mass Effect would raise the appeal of the game, and it would at least give the actor a bit more feedback (and maybe incentive?) not to just lazily turn in a performance. But that’s more of an artistic preference than any actual fault of the tech.

            Uncharted is probably not the best example to use; the series is mostly praised for its environmental graphics rather than its facial animations.

            I may not like UC4 but Naughty Dog’s facial animation technicians did do a pretty good job capturing the nuances of North’s facial expressions for Drake. I would definitely be curious to see how well something like that would look using Ninja Theory’s injector and North’s natural acting ability. Could North visually emote as well as Naughty Dog could animate the thespian’s face? It’s an interesting design prospect when it comes to the artistic value of optimizing the pipeline.

            As for algorithms, I wasn’t talking about randomly taking slices of animation sets and piecing them together. I was referring more to taking a set of data points, determining an average, and letting a program render the face organically based on a lot of set points instead of slapping on set bits of animation already in the system. Adaptive face-rendering software, if you will.

            This hasn’t really been a viable way to animate character faces, due to the fact that raw math has no objective sense of beauty. And acting is all about gesticulated aesthetics: portrayals shaped by a preference for idealized emoting.

            This isn’t to say that what you’ve described doesn’t exist — it does exist, but it looks more uncanny than most other facial algorithm solutions on the market. Here’s a test of an algorithm organically animating a face without key-frame data or any prefabricated animation sets.

            https://youtu.be/ZqYqGUxbAb0

            It’s pretty frightening.

            Moving toward more easily captured realism might help some projects with their vision, but it also brings to the forefront the current issues with voice acting itself and how the industry goes about recording dialogue, compared to animated movies where the actors are allowed to play off each other.

            Well, this would definitely help with actors playing off each other, because they’ll be able to see exactly how their characters will look during the actual acting process. For games that already do performance capture (which is most AAA games) and use Unreal Engine 4 (again, a bunch of games fall into this category as well), you don’t really lose anything with this method. More than anything, I would imagine actors would be happy to see what the final render will look like while they’re acting; it might even help with their performance.

            Of course, this is assuming they’re good actors and can also do some decent voice acting. Voice actors who can’t physically act could end up being put out of work if something like this caught fire and became industry standard.

            This is not a company I can say I’d be terribly sad to see die. Especially since I’m sure a more capable team will pick up the technology if it has any merits.

            It all depends on management, the deals they strike, and their long-term goals. I wouldn’t mind if Ninja Theory became more of a support studio for middleware, like IDV, Epic, or Crytek.

          • Phasmatis75

            Yet you’re still going to need clean-up for motion capture anyway, since action scenes are still motion captured as well and require sets. Thus you will already have the employees on hand and the work already being done.

            This raises a huge concern about viability, in light of the fact that the old methods will still be used to capture action and movement. If we were just going for portraits, there would be no issue, but there are a lot of variables.

            Hollywood actors and actresses are barely capable of acting as it is. Their star power has greatly diminished in the cultural zeitgeist, with most seeing them as out-of-touch hacks.

            I can’t say I’ve ever heard of anyone being stoked for a game because it had actor X or actress Y. I’ve seen “screw you guys for replacing the actor” and “not this bitch again,” but never “OMG, a Hollywood voice at five times the normal price is in this game, I don’t care about gameplay, graphics, presentation, or story, just that it has that actor in it.” I have seen, though, “wasn’t that the game that had X voice act?”

            Getting Hollywood talent is expensive, and frankly only a handful of actresses are capable of emoting. Hell, I almost hope this technology gets adopted now, to force actors and actresses out of the industry as a result of their expressionless faces.

            Either way, the issue is that better talent that can emote costs more money.

            Aye, I’m willing to accept that. I absolutely hate 2, but I’d be lying if I said it didn’t look impressive at times.

            In the video game industry, voice capture is mostly done in isolation while reading off scripts. Come to think of it, if we add emoting into this, that’s going to extend the process and the costs, especially if they have to come back and reshoot entire scenes.

            I’ll be 100 percent frank: as of this point, I wouldn’t invest in the technology if I had the money. It has too much going against it; as previously mentioned, that includes a litany of unexpected costs, the fact that it only replaces the face capture and not the action capture, and the uncanny valley effect. The two greatest hindrances to this technology’s adoption are going to be costs and acclimation time. I don’t see a lot of publishers willing to take a hit on a title for their developers to get used to the technology.

            Though I can say I do see a place for it with those crappy 3D shows. No one’s going to care about the uncanny valley there.

          • It has too much going against it; as previously mentioned, that includes a litany of unexpected costs, the fact that it only replaces the face capture and not the action capture,

            Actually, you can do the whole performance capture for the motions as well, so you can nab it all in one go. So technically it would also help with capturing movements in real time. You could also go super-cheap making a fighting game by using two Kinect 2.0 devices as the cameras and then feeding the data through the injector into UE4, so you could capture all your fighting moves right there in-house and save all the data sets right there in UE4’s blend tree.

            If I ever decided to make a full-on fighting game, that would definitely be the way to go, since it’s like 1000% faster than key-framing the data.
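Purely as a hedged sketch of what fusing two depth-sensor skeleton streams before handing them to the engine might look like (the joint format and data here are invented for illustration, not the actual Kinect SDK):

```python
# Hypothetical sketch of fusing skeleton data from two depth sensors before
# feeding it to an engine-side importer. The joint format and example data
# are invented for illustration; this is not the Kinect SDK.

from typing import Dict, Tuple

Joint = Tuple[float, float, float]          # x, y, z position in metres

def fuse_skeletons(front: Dict[str, Joint],
                   side: Dict[str, Joint]) -> Dict[str, Joint]:
    """Average the joints both sensors agree on; keep whatever only one
    sensor saw, so occluded limbs (a punch hiding an arm) still track."""
    fused: Dict[str, Joint] = {}
    for name in front.keys() | side.keys():
        a, b = front.get(name), side.get(name)
        if a and b:
            fused[name] = tuple((p + q) / 2 for p, q in zip(a, b))
        else:
            fused[name] = a or b
    return fused

# Example frame: the side camera can see the right hand the front one lost.
front_view = {"head": (0.0, 1.7, 2.0), "left_hand": (-0.4, 1.1, 1.9)}
side_view  = {"head": (0.02, 1.68, 2.0), "right_hand": (0.5, 1.2, 1.8)}
print(fuse_skeletons(front_view, side_view))
```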

  • Disqusted

    Header image looks like James Cameron wet dream material, when he’s not busy saving the galaxy from climate change.

  • Richard

    “One might be convinced that Ninja Theory was trying to sell gamers a partially interactive movie for $29.99.” The Order: 1886. Ninja Theory should remember that it did not sell well, if that is what they are going for.

    • At least with The Order: 1886 they did have some actual gameplay videos leading up to release. But yeah, it was basically a four-hour interactive movie.

  • totenglocke

    I’m intrigued by the trailers, but I won’t drop money on it until I see reviews and gameplay.

  • Joshua Anderson

    I can’t say their lack of showing gameplay is a good thing. A game is 90% gameplay, and without it, you may as well watch a movie. I’ll continue to keep my eyes peeled.