Digital Domain Welcomes Gabrielle Gourrier as Executive Vice President of VFX
Digital Domain appointed Gabrielle Gourrier as Executive Vice President of VFX Business. In this role, Gabby will focus on expanding client partnerships and establishing strategic initiatives.
Chaos V-Ray 7 for 3ds Max brings Gaussian Splat support for fast photoreal environments, new ways to create interactive virtual tours and more realistic, controllable lighting to 3D rendering.
VFX Supervisor Morgan McDermott at Impossible Objects talks about making their immersive film for UFC’s debut at the Sphere, combining heroes and symbols of Mexican independence with UFC legends.
Chaos puts Project Arena’s Virtual Production tools to the test in a new short film, achieving accurate ICVFX with real-time raytracing and compositing. Christopher Nichols shares insights.
Moving Picture Company (MPC) has appointed Lucinda Keeler as Head of Production for its London studio, bringing over 20 years of experience and leadership in the VFX industry.
REALTIME studio has launched a Virtual Production division, following its grant from Media City Immersive Technologies Innovation Hub to develop a proprietary Virtual Production tool.
ZibraVDB plugin for Virtual Production and CG studios delivers high compression rates and fast render times, making it possible to work with very large volumetric effects in real-time.
Maxon One 2025 updates Cinema 4D, Redshift, Red Giant and Cineware, and releases ZBrush for iPad, putting ZBrush sculpting tools into a mobile device with a new UI and touch controls.
Das Element asset library software version 2.1 has new video playback controls, hierarchy tree customisation for libraries, faster set-up processes and simpler element migration.
Autodesk returned to SIGGRAPH 2024 to show software updates that include generative AI and cloud workflows for 3D animation in Maya, production scheduling and clip retiming in Flame.
Shutterstock launched a cloud-based generative 3D API, built on NVIDIA Edify AI architecture, trained on licensed Shutterstock content, as a fast way to produce realistic 3D models with AI.
Freefolk has promoted Rob Sheridan to VFX Supervisor in their Film and Episodic division and Paul Wight is now the company’s first Chief Operating Officer.
View of Sydney Opera House from the Sirius Building.
Architects traditionally use highly detailed, accurate visualisations to bring their blueprints, sketches and concepts into the real world, and sell their designs to clients. Production company Binyan Studios specialises in creating such visualisations, applying their own distinctive approach to depicting how concepts and designs will look once they are built. Their work ranges from 3D renders, film, animation and broadcast-quality commercials to immersive VR tours.
Under the direction of CEO and Founder Andrei Dolnikov, Binyan’s teams work from facilities in Sydney, Melbourne, Brisbane, London, New York and Los Angeles, and service clients including property developers and architects who need to sell, lease or promote new developments. Sir David Adjaye, Frank Gehry, B.I.G. and Zaha Hadid are among the architects they have worked with.
The studio has developed an Autodesk 3ds Max pipeline they use to transform concepts into narrative visualisations. These visualisations are then used as imaginative marketing tools to tell stories and create experiences that take clients on a journey through their architectural designs, ultimately selling them on the concept.
Trend Setting
Binyan recently completed renderings that transform the Brutalist-style Sirius Building in Sydney, originally built in 1980 to house working-class residents, into a new development that features spectacular views of the Sydney Opera House and Harbour Bridge. Andrei said, “In Brisbane, we worked on a vertical garden that was designed for Aria Property by architect Koichi Takada. We’ve also recently worked on a photoreal animated film for the Riviere by Aria in Brisbane, as well as renderings for a resort-style development for the Howard Hughes Company in Honolulu.”
Binyan has initiated trends that are seen everywhere in today’s design visualisation industry. For example, Andrei and his team focus on a more artistic approach when developing visualisations, using photography as inspiration to transform architectural sketches into full 3D renderings that evoke emotion and memories in people, and convey a narrative story behind the work.
“The images we create may be commercial art, but still need to follow the principles of composition, lighting hierarchy and narrative,” Andrei said. “I’m always looking outside our immediate industry for inspiration, viewing things like new product launches, art installations and the entertainment industry to bring new depth into our work to really capture clients’ attention.”
Photo Inspiration
“Photography is all-important in our work, used as reference to achieve the photoreal look we need,” he said. “We look at great photographers' work for inspiration as well to develop a specific look for each project. Our projects are so diverse in terms of type, location and demographic that it’s vital we capture the essence of each venue we are trying to bring to life. A San Francisco sunset looks different from a Dubai or Melbourne sunset, and the harbour lighting in Sydney looks different from a Brooklyn morning sunrise.
“We capture the photography for backplates, plus foreground details, and use this material for photomontages. For preference, we’ll shoot our own images wherever possible or brief a photographer. Then we matte paint or composite the images in Photoshop to create still images, and for our animations and films, we use video compositing software.”
Riviere, Brisbane QLD
Design Workflow
The typical design workflow begins when Binyan receives a client brief, detailing the project, brand guidelines, overall look and target demographics. Once the tone, artistic direction and project style are approved, Binyan begins the 3D modelling phase in 3ds Max – the studio’s main content creation tool – to model each of the distinct areas that will be illustrated in the final visualisation.
“We typically receive 3D DWG files or Revit files. The more precise they are, the better, but even when we get a client model, we do a lot of our own modelling to be able to inject the extra level of detail our clients expect. We are geekily passionate about detail ourselves – it makes all the difference.
“3ds Max works especially well for us because of its complexity – it has tools we use to approach projects in a granular way, taking an up-close look at every minor detail. Everything is controllable, everything is possible. Just about whatever a client asks for, we are able to deliver,” said Andrei.
Real-time Iteration
One 3ds Max feature that stands out for Binyan’s style of work is the ability to view iterations in real-time. The ActiveShade feature starts an interactive rendering session in a viewport, allowing you to see your scene in near-final render quality as you work. Whenever you adjust vertices, apply transforms to geometry, or change lights, cameras or materials directly in the viewport, the results update automatically to show the final render. The rendering quality and interactivity depend on the renderer you are using.
Sirius Building, Sydney
3ds Max also has robust retopology tools that automatically optimise the geometry of high-resolution models to create clean, quad-based meshes. For example, the tools remove artifacts and other mesh issues ahead of animation and rigging, or when applying textures and manipulating objects. When the model's vertices, edges and faces are well-organised, animations are more fluid and rendering needs less memory – holes, spikes and isolated elements take extra processing time.
Since retopology works best on objects with uniform, equally-distributed faces, modifiers are available to optimise the mesh first. The Subdivide modifier creates smaller, more even triangles, and Smooth applies smoothing groups to the mesh. Mesh Cleaner will check and fix errors in data imported from other applications.
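For scripted pipelines, this preparation can be automated. Below is a minimal pymxs sketch, assuming 3ds Max’s built-in Python environment and the standard Subdivide and Smooth modifier classes – the class and property names follow the MAXScript reference, so verify them in the MAXScript Listener for your version.

```python
# Minimal pymxs sketch (run inside 3ds Max's Python environment):
# prepare a mesh for retopology by subdividing and smoothing it first.
# Modifier class and property names follow the MAXScript reference;
# verify them in the MAXScript Listener for your 3ds Max version.
from pymxs import runtime as rt

obj = rt.selection[0]                            # currently selected object
rt.addModifier(obj, rt.Subdivide(size=5))        # smaller, more even triangles
rt.addModifier(obj, rt.Smooth(autosmooth=True))  # apply smoothing groups
rt.redrawViews()
```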
Rendering and Post
During composition, the Binyan team explores the best time of day, lighting and angle to portray the architectural model, later managing colour production, texturing, lighting and landscaping.
Once the client has signed off on a near-final draft, the studio renders all files and completes post-production, using tools like Adobe Photoshop for stills, Houdini and Phoenix for animation, and Fusion for compositing, among others. For digital experiences or VR projects, the team takes over the installation of the physical hardware and custom software development to help the project come to life exactly as required.
Ward Village pool house, Honolulu
Story at the Core
As storytelling hasn’t traditionally been associated with architectural visualisation, Binyan has worked out their own processes and made it a signature in their work. Andrei said that storytelling is in fact at the core of what they do and the foundation of every image, not to mention their films, animations and immersive experiences.
“Even for a humble bathroom CG still image, we ask ourselves – who is the audience, how do we want them to feel when they look at this image, and how will we achieve that through composition, lighting and detail?”
To answer those questions, the story remains front and centre. While it’s always a collaborative process, Binyan typically originates the key concept, based on a thorough brief from the client. “This brief includes the brand identity and graphic design elements for the overall campaign from the design agency and the architectural vision from the architects. Our in-house Directors take these ingredients and propose two to three ideas or treatments for the client group to review and from there, the chosen direction is fleshed out into a storyboard and animatic.
“Then we coordinate the production of each scene, which can range from purely CG/3D scenes and VFX scenes, to live action scenes involving talent, hair and make-up artists and a full cast-and-crew shoot on location. The shoots usually last two or three days and frequently include green screen shoots that our VFX supervisors lead.”
Architecture in Motion
Ward Village pool at sunrise, Honolulu
Binyan’s use of motion and animation makes their projects varied and interesting, incorporating camera moves, FX and procedural effects, lights and motion graphics. Decisions about the approach are driven by the idea as much as the budget, beginning with concept development and becoming concrete at the storyboarding stage.
“As the budget is a known factor from the outset, we’ll pitch an idea based on this – the greater the scope, the more complex and memorable the FX scenes will be. Motion graphics are determined by the kind of production we are working on. If the objective is to explain a complex scheme and to 'introduce' the audience to a project, then we will propose more motion graphics components.
“On the other hand, for more ‘product-based’ films – where the audience is already informed about the developer and architect and simply wants to fall in love with the architecture, views and so on – we will go for a purer 3D approach, as in the Riviere animated film.”
Time Factor
Because projects often take years to develop, time is another work factor. Helping clients use visualisation to get started on major projects while they are still in an early design phase adds to the challenge. “For example, we are currently working on a large, complex project in California that will completely transform its precinct. It’s still early days, and some elements are at a more advanced stage than others.
Burleigh Heads, Gold Coast, QLD
“The project will be developed over several years, but meanwhile they need us to create accurate visualisations of how the final product will look. This process is both exciting and challenging, and it takes all of our experience to bring it to life. In the end, though, it’s very gratifying.”
In the midst of a global pandemic, the company has witnessed increasing demand for visualisations, which can be easily accessed by remote clients and shared online. Andrei describes the work environment as demanding and high-performance, calling on the team to handle a diverse body of work – “everything from a small bathroom to a massive masterplan on top of a mountain in Saudi Arabia,” he said.
“It’s a super exciting time for us as well. Animations and digital experiences like immersion rooms, sales gallery activations, media tables and so on are becoming a component of many projects we work on. We are really investing in our talent, hardware and software to be able to grow in this space. Real-time rendering is a big part of this too, and is finally becoming a realistic component of a photo-real workflow. We see very, very fun times ahead.” www.autodesk.com.au
In March 2021, Foundry introduced machine learning into Nuke 13.0 with the addition of the CopyCat node, a tool that takes the process of building custom tools beyond scripting, Python extensions and SDKs. CopyCat hands users a process whereby they can show a network exactly what needs to be done to complete a task, how it should be done and to what standard.
Starting with reference frames from a sequence, artists can use the CopyCat node to train neural networks to automatically complete tasks ranging from rotoscoping, clean-up and beauty work, to correcting focus and other camera effects. The artist inputs a selection of the original frames, each matched to a frame showing what he or she wants them to look like, called the Ground Truth, and effectively trains a network to replicate the desired effect across all frames in a shot.
Training neural networks to work on video is an application of machine learning (ML) that the software developers at Foundry have been interested in for some time. Digital Media World had a chance to speak to Foundry’s Head of Research Dan Ring about the history of ML at Foundry, what they are working on within this field now, and what directions they plan to take it in the future.
From Research to Studios – Bridging the Gap
Foundry’s research team launched a project in 2019 called ML Server – an open source academic exercise for users to test. One of its goals was to determine the level of interest that exists in their market for machine learning. Dan said, “ML isn’t a new development, and many people are still thinking in terms of ‘deep learning’, layers of operations stacked on top of each other to mimic human neurons. So far, the way human neurons work has proven incredibly complex to recreate, but the research being devoted to tasks that rely on image recognition has been more promising.”
“That research led to thinking about image-based tools for compositing. Images are a dense source of precise information, and image recognition is a common element of VFX work and editorial. Furthermore, creating masks based on images is a major element of compositing. However, bringing the findings of academic research into practical applications has been difficult, and bridging that gap was a further motivation behind ML Server.”
For instance, the computation and code for academic applications and experimentation don’t have to be extremely robust, whereas to be useful in VFX applications, coding has to perform consistently in virtually any situation. ML Server helped define practical constraints as well, such as graphics hardware requirements, which operating systems were relevant and what interests artists the most about ML for their work today.
CopyCat can be used to create a precise garbage matte for the woman in this video. (See the image below.)
Supervised Learning
As a consequence of that project, Foundry felt compelled to put ML into a set of tools that artists could use in their own way to solve their own problems. “Foundry did not want to be sitting in the middle of, or interfere with, that problem-solving process. Instead, the purpose of CopyCat is to allow artists to train a neural network to create their own sequence-specific effect in Nuke.”
At the simplest level, ML is supervised learning. You have some data you want to use to develop a model to achieve a certain task. So you go through a process of training, assessing the results and refinement. For the model, learning at this level happens largely through making mistakes and being corrected.
To address the challenge of transferring that process to VFX production, we can use the model’s ability to improve itself and, more specifically, to improve by doing the task your way. “Here is the data I have, HERE is what I want you to pull from it.” An example of a practical application would be: “I give you an input image, you give me a useful mask based on that. I’ll show you some examples of what I mean by a mask, and then you create a network that can generate a mask from any of the other frames.”
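That before/after pairing is the standard supervised image-to-image pattern. The sketch below is not Foundry’s implementation – just a minimal PyTorch illustration of the idea: a handful of (input frame, Ground Truth) pairs, a small convolutional network and a reconstruction loss that penalises the difference between the network’s output and the artist’s example.

```python
# Minimal PyTorch sketch of the before/after training idea - a generic
# supervised image-to-image loop, not Foundry's CopyCat implementation.
import torch
import torch.nn as nn

net = nn.Sequential(                          # stand-in for the real network
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
optimiser = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def train(pairs, steps=1000):
    """pairs: list of (input_frame, ground_truth) tensors, shape (3, H, W)."""
    for _ in range(steps):
        for x, y in pairs:
            pred = net(x.unsqueeze(0))            # add a batch dimension
            loss = loss_fn(pred, y.unsqueeze(0))  # distance from Ground Truth
            optimiser.zero_grad()
            loss.backward()                       # learn from the mistake
            optimiser.step()
```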
Shot-Specific
This was the premise behind Foundry’s ML Server. In fact, the pool of artists who were testing the program loved using it. Even though it wasn’t quite ready for production yet, they started trying to use it in production anyway. Once Foundry understood that users do want access to ML in their software, the team wanted to find out exactly what artists were using ML Server for. That was where the CopyCat node came from – the result of transferring those capabilities into Nuke as native functionality.
“The core principle of the node is to take an image from your source media and give it to the system together with a specially created frame – for example, a keyframe – as an ideal outcome,” said Dan. “Once you have given it several more examples – in the tens, not hundreds – the network will then try to copy your work across all frames as it continues to develop, but in a manner generalised only as far as that shot extends, as defined by your samples. In other words, the system will be shot-specific.
“That is a key difference between CopyCat and a traditional ML model. No generalising is done for the wider world. You are training this limited-application model for just what you are doing right now. This approach allows artists to work at scale, accelerating their work with the option to use the same trained network on multiple sequences.”
Inference
The CopyCat tool consists of two nodes. Training networks is done with the CopyCat node – you give this node your example pairs and it generates a .cat file. The .cat file holds intimate knowledge of the training pairs and the relationship between the two images in each one, but that is all it knows. To do actual work, it has to be passed on to the Inference node, which applies the .cat file’s knowledge across the rest of the frames to create the effect modelled by the network.
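In script form, the wiring looks something like the sketch below. The node class names match Nuke 13’s CopyCat toolset, but the input order and the knob name holding the .cat file path are assumptions – check the actual nodes in your Nuke session.

```python
# Sketch of wiring CopyCat and Inference from Nuke's Script Editor.
# Input order and knob names are assumptions - inspect the real nodes.
import nuke

copycat = nuke.createNode('CopyCat')              # trains on example pairs
copycat.setInput(0, nuke.toNode('Read1'))         # source frames (assumed input 0)
copycat.setInput(1, nuke.toNode('GroundTruth1'))  # matching 'after' frames

inference = nuke.createNode('Inference')          # applies the trained network
inference.setInput(0, nuke.toNode('Read1'))
inference['modelFile'].setValue('/path/to/model.cat')  # assumed knob name
```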
Foundry split these functions for two reasons. One is hardware efficiency. The learning process that builds the knowledge into the .cat file takes more compute power than applying the effect. After that huge learning effort, the Inference node can in fact run on a laptop.
Dan said, “Another reason for separating the CopyCat node from Inference is to make the .cat file more flexible and recognise its value as a piece of IP in itself. Nuke has native support for your .cat file, and many existing models trained in other software can be saved as .cat files as well by adding certain types of metadata. Both can be used in Nuke with the Inference node.
The Value of an Artist
“Instead of working as a predefined tool that does only one thing, CopyCat puts machine learning within reach of an artist’s imagination and can therefore do lots of things, depending on that artist. This process empowers artists, but can’t replace them, because the trainer is the artist, not the model. The artist is isolating the tedious, repetitive portions of the work and teaching the model how to complete those portions to the necessary standard.”
The idea that artists will somehow lose out through the adoption of ML tools is an obvious concern. A roto artist working manually will normally finish 60 to 100 frames a day, so a 200-frame sequence might take two to three days. Using ML can bring that down to two hours.
However, Dan also said, “One thing that our rotoscoping example highlighted to us was the value of the artists we were working with. A good roto artist can elevate a shot to something beautiful and totally believable. It takes a long time and enormous skill to be really good at the work and, therefore, to be able to train a model effectively. As the network can only imitate the style of its trainer – precise and conscientious, or haphazard and inconsistent – the work needs a good artist as well as someone who can select the most relevant series of reference pairs.”
Beauty repairs achieved with CopyCat.
Applications
Another application Foundry has tested with CopyCat is beauty repairs, which can be labour-intensive due to subtle changes in lighting that may occur across frames, or to shifting camera angles. Artists can remove beards, tattoos, bruises and scars this way.
An even more valuable application is correcting focus issues in video, for example, bringing a blurred shot into focus. The artist searches through the video to find the frames, or even just parts of frames, that are focussed correctly. As in the roto mask and beauty examples, the artist then shows the network before/after pairs and asks it to discover relationships between the source material and the desired outcome.
“If the network turns out a bad frame or two during training, you don’t have to restart the process,” said Dan. “The system is cumulative, incrementally building up expertise. You can just correct the bad frames in the same way as the others, and add them to the training file – that is, the .cat file – of before/after pairs. The system will continue from there.”
Creativity
The other, critical side of artists’ contribution to projects is creativity. Networks and models are capable of extreme consistency – that is what they are good at and what CopyCat takes advantage of. Creativity comes from elsewhere.
Masking, beauty work and re-focussing have traditionally been hugely time-consuming, costly issues for compositing. The cost comes not only from having to pay for hours of manual work, but also from missing out on creative ideas that never had a chance to be developed ahead of deadlines.
“That potential was evident when Foundry found that the artists testing CopyCat were using it for a much wider variety of interesting, specific tasks – de-aging, animation corrections and anything that benefits from a repetitive, automated approach and absolute consistency. Training a tool to detect and flag animation errors, or even go on to fix them, at the QC stage is a use case with a lot of scope that just needs creativity to expand,” Dan noted.
Extensibility and Data Quality
Foundry’s developers want to build extensibility into CopyCat for artists who train networks, whether with .cat files originally built in CopyCat or with models created elsewhere. They especially want to encourage artists to bring their own models to the pipeline, not just bring data to Nuke and wait for an algorithm to be developed for them.
Training a model to de-blur video.
“But it has to be done properly, just as developers have done with their C++ SDKs for Nuke, scripting and Python Nuke extensions,” Dan said. “These now have conventions for use as part of Nuke. We’re aiming to do something similar for users’ ML models as part of our effort to bridge the gap between academic research and production.”
The high quality of the data that VFX artists give their networks also gives them a head start in terms of training time and use-specific tools. Video from media & entertainment projects usually has a high dynamic range and resolution, accurate lighting properties and approval from a director and DP. In the past, neural networks were set up to chew through large volumes of relatively low quality, available images, and could take a long time to train into a workable model. When it comes to creating useful tools, however, starting with a good artist and good data will shorten training time.
VFX Wish-list
“VFX studios have fantastic data sets for training networks. Foundry wants them to know that they already have the means to train models for many applications. We don’t have that data ourselves, but have developed CopyCat to help them take advantage of those data sets. The quality of their data, combined with their artists’ skills, can make them more competitive,” said Dan.
Ultimately, tools like CopyCat will not remove artists’ jobs – they will make VFX work more creative. Today’s short timeframes mean the wish-lists associated with jobs may only ever be 40 percent completed. With ML, teams may be able to get to 60 percent of the items on those lists. ML gives a studio an opportunity to move past the busy work, squash repetitive tasks and get to the harder, more creative problems faster. It may also mean VFX artists have a new skill to acquire – quickly and efficiently training neural networks.
Smarter Roto
Dan also talked about how Foundry will continue to progress the use of ML in its software. An on-going project with the University of Bath and DNEG, called SmartROTO, addresses the issue of correcting models in the course of training – figuring out how best to tell the system, in a meaningful way, that its work has fallen short of the goal, so that it appreciates the problem and tries to improve.
Training networks to recognise arbitrary shapes that change over time requires interactivity.
He said, “It called for something more interactive than preparing a new ‘after’ frame and making sure that all feedback is detected, interpreted and applied correctly. We needed to give the network a region of ‘correctness’ and developed two models, Appearance and Shape, both of which test whether a given image is visually similar to the desired outcome. But they test for that in different ways.
“The Appearance model determines whether a given arrangement of pixels looks like something we've seen before, while the Shape model constrains the set of possible matches, saying 'force the current silhouette to match a silhouette I've seen before'. Typically, ML models for images only use Appearance models, that is, they're given an image patch and asked to classify or transform it – for example, determining whether or not an image contains a dog.”
Appearance models work well in most cases, but have no meaningful concept of the shape or structure of the element being worked on, which is much more important for roto. The Shape model encodes this idea of structure, and validates what the Appearance model has found.
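Expressed as a conceptual sketch – not the SmartROTO implementation – the division of labour amounts to two independent scores that must both pass before a candidate match is kept. The stub values below stand in for the trained Appearance and Shape networks.

```python
# Conceptual sketch only, not SmartROTO code. Two independent tests must
# both agree before a candidate match is accepted.
def accept(appearance_score: float, shape_score: float,
           threshold: float = 0.8) -> bool:
    # Appearance: does this arrangement of pixels look familiar?
    # Shape: does the silhouette match one seen in the keyframes?
    return min(appearance_score, shape_score) >= threshold

# A patch that looks right but has an implausible silhouette is rejected.
print(accept(appearance_score=0.93, shape_score=0.41))  # False
```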
“If we are rotoscoping an arm, it will only encourage arm-like shapes and reject others," Dan said. "In SmartROTO, we show what an arm looks like by giving the system a handful of keyframes to use as its base model. Every additional keyframe from the artist updates the model further. By the end, it has a solid understanding of the object and all the possible ways it deforms in the shot.” www.foundry.com
Cloud rendering services are a form of remote working and of virtualising hardware-based processes that studios have traditionally handled in-house. Such services have been evolving for several years and the market has now grown competitive as off-site rendering companies continue to improve, specialise and extend their functionality.
GridMarkets is a rendering and simulation service that supports freelance VFX artists and studios that have complex, time-intensive projects to manage. The company develops a platform of submission plug-ins supporting the most commonly used software applications for rendering, animation and simulation. These include the Maya, Houdini, Nuke, Cinema 4D, 3ds Max and Blender content creation packages, and several renderers, including Arnold, V-Ray, RenderMan, Redshift and Mantra.
GridMarkets designed and built their platform by collaborating with VFX teams, continuously gathering input from them to keep further development of the plugins relevant to their work. The company uses machines with 32 virtual CPUs at 3GHz per core and 60GB of RAM, plus Tesla K80 or P100 GPUs, and has access to thousands of machines. Digital Media World had the chance to talk to GridMarkets co-founder Mark Ross about what makes this service different to all the others, and how it works.
Plugins and Pipelines
“To set the company apart from its competitors, GridMarkets focusses on several factors, one of which is extreme scalability,” Mark said. “A facility may not have a need for massive render capacity and power on a regular basis, but when they do, knowing that they can specify the necessary number of high-powered machines to finish on schedule is critical. To be able to scale directly from the team’s hardware, GridMarkets’ service has an API for integration into their render farm and pipeline management system.”
The platform is also simple to work with, both to set up jobs and to pay for them. The submission plugin allows you to submit from your creation software’s UI and choose a preferred renderer, the number of machines, and GPUs.
The pipeline and workflow are different for each type of software. For Houdini, for example, a special pipeline has been developed to manage cloud simulations that includes uploading the dependency tree and all simulation inputs, plus caching so that the sim output can stay in the cloud until processing is complete. This saves using local storage for repeated transfers of output data.
The Cinema 4D pipeline now works with Release 23. Some of the pipelines include a Nuke node to process frames through a Nuke script after rendering, and an FFmpeg transcode node to compile frames into a video file on the GridMarkets servers.
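The transcode step itself is standard FFmpeg usage. The sketch below is not GridMarkets’ node, just the generic operation it performs – compiling a numbered frame sequence into an H.264 video – with placeholder paths.

```python
# Generic FFmpeg transcode of a numbered frame sequence into H.264 video.
# Paths are placeholders; this is not GridMarkets' actual transcode node.
import subprocess

subprocess.run([
    'ffmpeg',
    '-framerate', '24',             # playback rate of the frame sequence
    '-i', 'render/frame.%04d.png',  # numbered input frames
    '-c:v', 'libx264',              # H.264 encode
    '-pix_fmt', 'yuv420p',          # widely compatible pixel format
    'output.mp4',
], check=True)
```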
All of the associated data transfers go through GridMarkets’ purpose-built file management tool, Envoy, which uses standard HTTPS for transfers to and from Google Cloud Storage, where the user’s project files and submissions reside in one location in individual account buckets. Envoy is authenticated using Google Service Accounts. Envoy optimises large file transfers for quicker uploads and downloads, and the workflow allows you to see all submissions to be processed for a project.
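Envoy is GridMarkets’ own tool, but the transfer pattern described – service-account authentication and HTTPS transfers into a per-account Google Cloud Storage bucket – follows standard GCS client usage. A minimal sketch, with placeholder bucket and file names:

```python
# Not Envoy itself - just the generic pattern it wraps: authenticate with
# a Google service account and move files over HTTPS to a private bucket.
from google.cloud import storage

client = storage.Client.from_service_account_json('service-account.json')
bucket = client.bucket('my-account-bucket')      # per-account bucket

blob = bucket.blob('project-a/scene.hip')        # destination object path
blob.upload_from_filename('scene.hip')           # upload over HTTPS

blob.download_to_filename('renders/scene.hip')   # retrieve results later
```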
Secured Global Environment
Mark said, “By taking advantage of GridMarkets’ global coverage and cloud-based tools, a production can make their project details and challenges visible to a dispersed team in a secured environment. GridMarkets assures users of the security of the service, one of the company’s top preoccupations, by following MPAA guidelines in the development and operation of its platform, which means that all of our partners – renderer and other software companies, cloud providers and so on – must comply with those regulations as well.
“GridMarkets executes tasks via a partner network of secured Oracle and Google data centres. The platform’s security measures focus on access to the network, machine security, authorisation, content management and transfer, and monitoring. An integration with Oracle Generation 2 public cloud infrastructure was made recently in order to increase the platform’s reliability and security.”
Oracle built the Gen 2 infrastructure specifically to run enterprise applications and databases. It has tools and utilities for constructing new cloud-native and mobile apps on a unified platform and networking fabric. In GridMarkets’ case, the goal is to establish secure rendering and simulation services for VFX and animation customers who generally prefer to devote more of their time budget to creative iteration than to technical infrastructure.
No Access
Purpose-built, secured virtual machines (VMs) manage the transfer of content. Each VM instance is only used for one job and its files and is then shut down to prevent access to the data by other users on later jobs. All jobs run with normal user permissions with no access to the machine’s administrative functions. Encapsulating the processing units in secure Docker containers within VMs also limits security risks.
No direct outside connection is allowed to any of the virtual machines to avoid exposing the compute nodes to the Internet. Data transfers and API calls within the GridMarkets infrastructure are all made through secure HTTPS connections – wireless communication is not used.
Managed Services and Pricing
“GridMarkets also operates managed services, which are mainly about saving time so that you can stay focussed on the creative demands of the project – and still meet your deadlines. The user delegates submission details to GridMarkets as if it was a member of the team,” said Mark. “GridMarkets then prepares the project files for simulation or rendering and keeps them together, helps manage the submissions of remote artists and optimises the scene and machine count so that the jobs process smoothly without stopping and starting.”
Pricing is handled entirely through pre-purchased credits, which are non-refundable but never expire. To upload and run a job through to completion, you need to have purchased enough credits in advance. A cost calculator is available online to help budget your project. Costs are calculated by machine hours, making it straightforward for on-demand use, but you can also cap your costs in different ways – limiting usage to a finite number of machines over time, for instance, or keeping render speeds down to a chosen level.
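Because billing is by machine hour, a rough budget is simple arithmetic. The sketch below is purely hypothetical – the figures are placeholders rather than GridMarkets’ rates, and the online calculator remains the authoritative source.

```python
# Hypothetical machine-hour budgeting sketch; all figures are placeholders.
def estimate(frames, minutes_per_frame, machines, credits_per_machine_hour):
    machine_hours = frames * minutes_per_frame / 60.0   # total billed hours
    wall_clock = machine_hours / machines               # elapsed time
    credits = machine_hours * credits_per_machine_hour  # credits consumed
    return wall_clock, credits

elapsed, credits = estimate(frames=200, minutes_per_frame=15,
                            machines=25, credits_per_machine_hour=2.0)
print(f'~{elapsed:.1f} h wall clock, {credits:.0f} credits')
```

Capping costs then amounts to fixing the machine count or the per-hour spend in that same calculation.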
In early May 2021, GridMarkets reached out to Zync Render customers looking for new rendering options, offering a 20% discount on their first month of using GridMarkets. The offer follows the recent announcement that Zync is ceasing operations in June 2021, and supplies that company’s user community with a comparable solution in terms of features, security and reliability. www.gridmarkets.com
One of Us has opened a new studio in Paris. The company, which is based in Soho in London, has been looking for the best way to expand, and Paris presents the ideal opportunity. The creative possibilities, the VFX talent and the projects underway in France make this an exciting moment. As the film industry recovers from the impact of the pandemic, the team feels confident and ambitious about setting up in Paris and continuing their love affair with Europe.
Leading the creative team will be long-time collaborator Emmanuel (Manu) Pichereau, who will be joined by a combination of One of Us staff members and a number of artists from his extensive French connections. Among Manu’s credits are ‘Under The Skin’, ‘Anna Karenina’, ‘Everest’, ‘The Revenant’, Netflix’s ‘The Midnight Sky’ and, most recently, ‘The Matrix 4’. People and projects have always driven the studio’s growth, and the Paris venture begins with a French classic in the shape of ‘Asterix and Obelix: The Silk Road’.
Emmanuel (Manu) Pichereau at One of Us in Paris
France is the birthplace of cinema, and film is a defining part of the national culture; the French have been responsible for many of cinema’s significant movements and moments, actors and directors. Hollywood often looks to Europe for inspiration and innovation. Manu said, “While the wider European industry undergoes a renaissance, with new investment in production infrastructure, stage space and new techniques, this feels like an exciting place to be.”
As well as having strong relationships with several Hollywood studios, One of Us has a history of working with European filmmakers, most recently on Matteo Garrone’s ‘Pinocchio’ and Luca Guadagnino’s ‘We Are Who We Are’. They also have strong connections with European VFX talent, cultivating relationships with top schools and universities.
“We are working with productions to ensure they make the most of the French tax incentives, which begin at 30% and rise to 40%, with the additional rebate targeted at and triggered by VFX spend, making Paris an alluring alternative,” Manu said.
“Our Paris studio will use a hybrid remote and office-based team, leveraging our London infrastructure. Over the past few months, we have improved our connectivity, tripled our storage and extended our render farm, and with recent innovations in remote working technology, our capacity is more flexible than it ever was previously.”
Emission Importance Sampling in Clarisse 5 - all lighting in this render comes from the emissive property of the materials.
Isotropix Clarisse 5 contains updates that address set dressing, look development, lighting, rendering and workflow. The application now has a multi-purpose PBR material based on the Autodesk Standard Surface Material specification. A new unbiased rendering feature, Emission Importance Sampling, has been added that can save lighting artists a huge amount of time. Also, upgrades to the Clarisse workflow help make it faster and simpler to use – item attributes are organised into hierarchical groups exposing only the most relevant attributes.
Dedicated Set Dressing
The Isotropix development team has introduced a built-in USD exporter in Clarisse 5, allowing entire Clarisse scenes to be imported into any application that supports USD without specialised tools. This feature means users can now use Clarisse’s set dressing abilities for layout, and then export the scene as a .usd file to complete lookdev and rendering in a preferred application.
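On the receiving side, any USD-aware tool can open the exported file with the standard Pixar USD Python bindings. A minimal sketch, assuming a placeholder file name:

```python
# Open a Clarisse-exported scene with the standard pxr USD bindings and
# list its mesh prims. The file name is a placeholder.
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open('set_dressing_export.usd')
for prim in stage.Traverse():          # walk the scene hierarchy
    if prim.IsA(UsdGeom.Mesh):
        print(prim.GetPath())          # each mesh's scene path
```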
Expecting an increase in users who mainly carry out layout work in Clarisse 5, the team developed related improvements such as orthographic views and camera overlays to display safe frames and composition guides. An entirely new graph editor comes with all the tools necessary to lay out and animate scenes, and a dedicated workflow that rapidly switches and manages multiple viewpoints in the scene.
Autodesk Standard Surface Material - Clarisse (left) and Maya/Arnold renders.
Look Development Revisited
Clarisse 5 now has a multi-purpose PBR (physically based rendering) material based on the Autodesk Standard Surface Material (ASSM) specification, a general surface shader already supported by Autodesk Arnold and Substance Painter. This development is consistent with Isotropix’s intention to move toward lookdev interoperability – ‘lookdev anywhere, render everywhere’, as Sebastien Guichou, CTO and co-founder of Isotropix, describes it. Sebastien has demonstrated that rendering the same image in Autodesk Arnold and in Clarisse 5 produces results that are indistinguishable from one another.
The new version 5 has a revised subsurface scattering engine built with a more accurate random walk and a faster, better diffusion-based model. Users can switch between these two methods to manage the rendering speed and accuracy tradeoff without first adjusting their materials.
Transmissive materials have also been improved in Clarisse 5 with more controls for shadowing and surface thickness management. Transmission scattering simulation can now render dense transmissive materials very accurately, such as thick and viscous liquids.
Lighting Controls
Significant updates to lighting in Clarisse 5 bring new controls to independently tweak light contributions in diffuse, reflection, refraction or volume paths. Sampling of area lights, especially IES lights in volumes, is improved to produce less noise in renders than Clarisse 4 for the same number of samples.
Random Walk SSS in Clarisse 5
A new mode makes lights contribute exclusively to AOVs. When in this mode, lights stop affecting the beauty pass, and their contribution is retrieved using light path expressions in AOVs. This feature makes it much simpler to set up rendering of additional fill lights used exclusively for compositing.
The ability to turn arbitrary geometries into lights and render them efficiently is another critical update. Due to the geometry-instancing capabilities in Clarisse, the renderer can now render virtually billions of textured lights while maintaining very fast render times, a low memory footprint and a high level of interactivity.
Render Efficiency Upgrades
Emission Importance Sampling (EIS) is a new unbiased rendering feature that updates the sampling of materials that define emission. It can save lighting artists a huge amount of time because it is no longer necessary to mock up emission using manually placed lights. In effect, this change brings together the look development and lighting stages.
The development team has documented several general rendering performance improvements in Clarisse 5. General ray tracing performance is up to 1.5x faster on actual scenes. Fur and hair now use a new engine that always renders curves adaptively and speeds up render times by up to 4 times, especially on challenging scenarios such as long tangled hair.
Random Walk vs Diffusion in Clarisse 5. Results are very close in these renders, allowing artists to switch between the two modes to manage quality and speed tradeoff without adjusting material settings.
Optimisations to volume rendering accelerate render times by over 3 times with multiple scattering enabled, and a new anti-aliasing importance sampling function improves render times while producing better results when denoised. A new BxDF path splitting strategy speeds up specular reflection/transmission paths, and a more predictable adaptive anti-aliasing method has been developed based on standard deviation.
Faster, Simpler Workflow
Upgrades to the Clarisse workflow in version 5 help to make the application faster and simpler to use. All item attributes have been reorganised into logical hierarchical groups, for example, and only the most relevant attributes are now exposed by default. From there, users can manage the visibility of attribute groups in the Attribute Editor, which also displays visual hints to identify user-modified attributes.
Global variables can now be promoted and edited more quickly from the main user interface of the application. When linked to expressions, they can be used to create smart project templates that users operate with a simple set of controls. Users also can control the Clarisse evaluation engine, which tracks images for user modification, to prevent it from loading and processing any actual data.
Transmission Scattering Simulation in Clarisse 5
The render history has been improved with a new mode to manage render snapshots. Built-in light path expressions complementing existing AOVs have been added, and light path expressions have been improved to support Clarisse’s material lobes component, so that users can readily extract coating from diffuse or specular layers into individual AOVs to better adjust them in compositing.
VFX Reference Platform
Clarisse 5 is compliant with the VFX Reference Platform Calendar Year 2020. All third-party libraries shipped with the software now match the versions listed here, including the move to Python 3.7, which is the new default scripting engine in Clarisse 5. As this represents a major change, it is still possible to run Clarisse 5 using the old Python 2.7 engine, and Isotropix will continue its support during the entire release cycle of Clarisse 5.
Clarisse 5 is available immediately. With the release of Clarisse 5, Isotropix is introducing an entirely new pricing schedule and special offers. More information is available here. www.isotropix.com