Digital Domain Welcomes Gabrielle Gourrier as Executive Vice President of VFX
Digital Domain appointed Gabrielle Gourrier as Executive Vice President of VFX Business. In this role, Gabby will focus on expanding client partnerships and establishing strategic initiatives.
Chaos V-Ray 7 for 3ds Max brings Gaussian Splat support for fast photoreal environments, new ways to create interactive virtual tours and more realistic, controllable lighting to 3D rendering.
VFX Supervisor Morgan McDermott at Impossible Objects talks about making their immersive film for UFC’s debut at the Sphere, combining heroes and symbols of Mexican independence with UFC legends.
Chaos puts Project Arena’s Virtual Production tools to the test in a new short film, achieving accurate ICVFX with real-time raytracing and compositing. Christopher Nichols shares insights.
Moving Picture Company (MPC) has appointed Lucinda Keeler as Head of Production for its London studio, bringing over 20 years of experience and leadership in the VFX industry.
REALTIME studio has launched a Virtual Production division, following its grant from Media City Immersive Technologies Innovation Hub to develop a proprietary Virtual Production tool.
ZibraVDB plugin for Virtual Production and CG studios delivers high compression rates and fast render times, making it possible to work with very large volumetric effects in real-time.
Maxon One 2025 updates Cinema 4D, Redshift, Red Giant and Cineware, and releases ZBrush for iPad, putting ZBrush sculpting tools into a mobile device with a new UI and touch controls.
Das Element asset library software version 2.1 has new video playback controls, hierarchy tree customisation for libraries, faster set-up processes and simpler element migration.
Autodesk returned to SIGGRAPH 2024 to show software updates that include generative AI and cloud workflows for 3D animation in Maya, production scheduling and clip retiming in Flame.
Shutterstock launched a cloud-based generative 3D API, built on NVIDIA Edify AI architecture, trained on licensed Shutterstock content, as a fast way to produce realistic 3D models with AI.
Freefolk has promoted Rob Sheridan to VFX Supervisor in their Film and Episodic division, and Paul Wight is now the company’s first Chief Operating Officer.
SideFX Houdini Engine for UE4 and Houdini Engine for Unity are now available free of charge to commercial customers. Previously free only for artists using Houdini Indie, the plug-ins now give commercial artists and studios the ability to deploy procedural assets created in Houdini to the UE4 and Unity real-time 3D platforms for use in game and XR development, virtual production and design visualisations.
Through the power of Houdini Engine, procedural tools and assets built in Houdini with custom-tailored interfaces can be brought into UE4 and Unity, and used by game artists whether they are familiar with Houdini or not. Houdini Engine does the processing work on the assets, and delivers the results back to the editor. These procedural assets work within the editor for content creation and are baked out before going to runtime.
The Houdini Engine plug-ins have been used on numerous shipped games including King’s Candy Crush, eXiin’s Ary and the Secret of Seasons, and Fishing Cactus’ Nanotale - Typing Chronicles.
Houdini Engine for Unity
The UE4 plug-in has been recently updated to a second version that has a redesigned core architecture and is more modular and lightweight. This version includes a new interface, support for world composition, blueprint support and a wealth of improvements and enhancements.
Customers can access up to 10 of these licenses per studio through the SideFX website, and request as many as they need through their account manager. For other host applications, such as Autodesk Maya, Autodesk 3DS Max, proprietary plug-ins, and for Batch processing on the farm, Houdini Engine licenses are available for rent. These licenses are also available as volume rentals for medium and large studios. Details are available on the SideFX website.
Houdini Engine Indie will continue to be free for limited commercial projects where the indie studio brings in less than $100K USD. www.sidefx.com
Artists can light multiple shots at the same time, making changes globally or precisely per shot.
Foundry’s Katana 4.0 has a new lighting mode and user experience, and updated USD capabilities. A new set of rendering workflows called Foresight is the main update in version 4.0. It comprises two new approaches, Multiple Simultaneous Renders and Networked Interactive Rendering, resulting in a fast, scalable feedback process that gives artists a chance to check their creative decisions ahead of final render.
Look Development
The look development architecture and UX in Katana 4.0, accessed through its tools and a shading node UI, allow look development to continue at the same time as shot production, whether artists are working on a single complex asset or a series of variations. Other tools can be used for procedural shot-based fixes or tweaks that all members of the production team can view and follow. Artists can also use Katana to drive and control look development with production metadata, so that teams can balance automation with manual work and achieve both efficiency and a high-quality result.
A single Network Material Create node may be used to create multiple materials that share shading nodes. The ability to create complex groups of materials gives artists more freedom.
Artists interact with nodes built as part of a node graph system that can handle very complex shading networks. The workflows associated with these tools are, in turn, compatible with Katana’s complex pipeline workflows.
The Network Material Create node is able to create and manage multiple materials inside one node. Each material can have networks for any active renderer plugin, plus USD Preview Surface materials. Using USD Preview Surface, artists can use Katana 4.0 to view materials, lights and shadows in the Hydra Viewer without rendering.
The Viewer is Katana’s viewport driven by Pixar’s USD Hydra system that was designed to work with modern graphics cards and handle massive scale. Due to a rewrite of the bridge that connects Katana to Hydra and the HdStorm render delegate, which aggregates and shares GPU render resources, users have better viewer performance and a more robust interpretation of USD information.
Here is an example of the PBR support available through HdStorm in Katana’s Hydra powered viewport.
Using Katana’s Network Material Edit node, look development and lighting artists make procedural edits to their existing network materials. They can customise a material from a material library for a specific purpose, for example, or make procedural shot edits to approved art direction. The Network Material Edit node’s UI visualises the full network, including materials designed in other packages and imported via USD, plus any edits, and each criterion can be filtered.
Apart from using the Network Material Create node to create and manage multiple materials, its workflows include storing material parameters for the surface, displacement or light shaders for multiple renderers at the same time, and constructing Network Materials using a combination of texture maps and renderer procedural nodes in a specialised UI. Nodes can be shared between multiple shading networks. You can develop looks for variations of assets using Katana’s parent and child material toolset, and place complex shading networks inside Katana’s Shading Groups to simplify sharing and reuse.
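For studios that script their pipelines, these nodes can also be created programmatically from Katana’s embedded Python interpreter. The following is a minimal sketch using the NodegraphAPI module that ships with Katana; the node name used here is only an example, and building the shading network itself would normally continue in the UI or with further scripting.

# Minimal sketch, run inside Katana's Python tab (NodegraphAPI ships with Katana).
from Katana import NodegraphAPI

root = NodegraphAPI.GetRootNode()

# Create a NetworkMaterialCreate node to hold one or more materials
# and the shading nodes they share.
nmc = NodegraphAPI.CreateNode('NetworkMaterialCreate', root)
nmc.setName('vinyl_materials')  # example name for a shared vinyl look

print(nmc.getName(), nmc.getType())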
Lighting Workflows for Artists
This animation is lit from multiple camera angles, in this case showing multiple shots, but it could also be multiple frames of the same shot. Each change can be viewed across all the outcomes it affects, which improves continuity and reduces revision cycles.
Using the lighting workflows in Katana 4.0, artists can create, place and edit lights much as cinematographers work live on set. The UI was built for speed and ease of use so that artists and teams can respond directly to art direction. Users can work right on top of the image during a live render session with gesture-based controls, a mouse or a Wacom drawing tablet. Katana’s renderer plugins draw rendered pixels on top of the GL pixel information from the Hydra Viewport.
A major component of the new digital cinematography workflows is the Monitor Layer, used to view the output of the renderer plugin directly in the viewer. Objects can be selected directly from the image using image-based selection tools.
The artist gets to create and edit lights by interacting directly with the image and scene objects – that is, controlling lights based on where they illuminate, or where the light itself should be positioned. Like a cinematographer, you can think of the environment both in terms of practical light sources such as lamps, and in terms of lights that support the scene in reference to the practical lights.
The interactive HUDs from the Lighting Tools allow direct control of any light or group of lights directly from the viewer. The artist can work full screen directly on their image.
That means the usual trial and error process spent adjusting numerical values or 3D transforms is no longer necessary. Instead, you can work in Katana’s viewer with a heads-up display, simplified to focus on controlling properties such as intensity, exposure and colour. Users select which colour and numerical controls are shown in the HUD. You can place a small HUD on each light and set up a HUD spreadsheet across your selected lights. This kind of work is what the viewer is designed for, using as much or as little on-screen control as you need to manage simple or complex lighting scenarios, straight from the viewer.
The idea is for artists to spend less time in the node graph and more time lighting, using the lighting tools to do more of the common tasks in the viewer – such as renaming, deleting and adopting lights for editing. Each of the controls is available for any of the available GafferThree nodes in the active node graph – which are controlled directly from the viewer as well. Sequence-based lighting edits can be managed through multiple GafferThree nodes.
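The same GafferThree nodes can also be set up from Katana’s Python API rather than the viewer. Below is a minimal sketch assuming the GafferThree package methods commonly used in Katana pipelines; the package type and light names are illustrative assumptions.

# Minimal sketch, assuming the GafferThree package API available in Katana's Python.
from Katana import NodegraphAPI

root = NodegraphAPI.GetRootNode()
gaffer = NodegraphAPI.CreateNode('GafferThree', root)
gaffer.setName('shot_lighting')

# GafferThree organises lights as packages beneath a root package.
root_pkg = gaffer.getRootPackage()
key_light = root_pkg.createChildPackage('LightPackage', 'key_light')  # assumed package type name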
Lighting Production Tools
Beyond the hands-on artist’s side of Katana’s lighting mode, the tools are designed for efficiency, so that fewer artists are needed to manage large numbers of shots at high quality in the least time. Artists can use deferred loading of scene information, procedural workflows, collaborative tools, viewer and live rendering feedback to speed up their work.
In this environment drawn in GL, with an asset rendered with ray tracing, the artist has fine-grained control over how they interact with the scene.
Light creation and editing are now both handled in Katana’s GafferThree node, interacting with single lights directly or controlling multiple lights at the same time via the Template Materials. You can edit previously created lights procedurally, allowing lights to have multiple looks across a sequence of shots. By referencing lighting from a library, you can make specific updates without losing the ability to inherit changes made to the lighting library.
Katana interactively communicates all rendering edits as individual changes to the user’s rendering plugin, allowing the software to access very specific information. Instead of coping with crashes, artists can use Interactive Render Filters to override render settings for better performance, without changing the settings for the final render.
Through Katana’s configurable UI, lighting artists configure each session to make the most of the current task and project, interacting with the lights and shadows of the Hydra Viewport, in the rendered image in the Monitor Layer or in the Monitor Tab. Feedback on the full history of current and past renders can be viewed in the new Catalog system UI (more about Catalog below).
Foresight Rendering
Katana’s scalable interactive rendering and a new set of APIs now make it possible to simultaneously render multiple images as artists work across shots, frames, assets, asset variations and other tasks from within one Katana project file. They can multitask while waiting for renders to deliver feedback on art direction, reducing the iteration cycle time. Using these Multiple Simultaneous Interactive Renders, an artist can also make one choice that affects multiple shots or assets from a single control, and validate multiple outcomes simultaneously.
These toys all share a common vinyl material. The material nodes that make each toy unique only change the texture maps that are applied to the model, while the properties that make it look like vinyl come from a common parent material. In the Katana Foresight workflow, vinyl looks can be changed from one material node and viewed on all the assets that use it at the same time, as multiple live or preview renders.
Machines can also be networked for faster renders and scalable feedback. Because rendering requires computational power, Networked Interactive Rendering has been developed for artists to use networked machines, other than the one they’re working on, to facilitate Multiple Simultaneous Interactive Renders for a single Katana session.
Accessing the extra power makes traditional test rendering via batch farm renders unnecessary. Without affecting their own workstations, artists can see and respond to a render’s interactive progress in the Katana UI instead of waiting for a finished frame.
The farm rendering APIs in Katana support connections to render farm management applications like Deadline, Qube, Tractor or custom in-house tools. Users can then deploy existing render farm resources in dedicated pools for interactive renders during the day, and return them to the pool for final frame renders at night.
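As a sketch of what such a connection might look like, the snippet below registers an extra entry in Katana’s render menu through the FarmAPI module and passes the project’s render nodes to a submitter; submit_to_farm is a hypothetical stand-in for a Deadline, Qube, Tractor or in-house submission call, and the exact FarmAPI hook used is an assumption.

# Minimal sketch, assuming Katana's FarmAPI menu hook; the submitter itself is hypothetical.
from Katana import FarmAPI, NodegraphAPI

def submit_to_farm():
    # Placeholder: gather the Render nodes in the project and hand them to a
    # farm manager such as Deadline, Qube, Tractor or an in-house tool.
    render_nodes = [n.getName() for n in NodegraphAPI.GetAllNodesByType('Render')]
    print('Submitting to farm:', render_nodes)

FarmAPI.AddFarmMenuOption('Submit to Example Farm', submit_to_farm)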
Katana 4.0 ships with tools that serve as examples for studios intending to customise their use of Katana Foresight workflows or connect them to a different render farm management application. Katana Queue is a small render queue system built to manage renders on an artist’s local machine, or a set of networked machines. You can also put your machine to work when multitasking with a scalable rendering tool controlled through a new process control interface called the Katana Queue Tab.
Creating multiple simultaneous renders poses the question of how to view them. The Catalog is a UI for multiple renders. It can show thumbnails at a user defined size, displayed as a vertical strip of thumbnails, and update while the images are rendered. The Graph State Variables and Interactive Render Filters are listed as combined or separate columns, to keep track of what each render shows.
Katana’s Monitor and Catalog allow an artist to see more than one render at a time. The Monitor can show two large images side by side or one on top of the other. The Catalog can now show larger thumbnails that update dynamically as the render progresses.
From the Catalog, an artist can choose two images to show at a larger resolution in the Monitor Tab. Images can be displayed side by side, one on top of the other, or as a wipe comparison, with panning and zoom synced or independent. Artists can compare two images or two parts of the same image while they work. Meanwhile, the front buffer always drives the Monitor Layer in the Viewer, which can match the scene state and allow you to use Katana 4.0’s lighting environment.
Collaboration
Katana is designed for look development and lighting teams to work together. In the node graph workflows, for example, artists define a series of ordered steps that can be shared with other team members, or reused for other parts of the project. With Live Groups, users group and publish parts of the node graph and save them to disk for sharing or reuse. Similar to a referencing system, you can manage versions of these groups, and publish changes to them wherever they are used.
Without writing any C++ code, TDs can write custom tools that perform anything from simple tasks up to complex sets of actions. Specific tools may be scripted via Katana’s OpScript node, which uses Lua. Macros package up a set of nodes into a shareable tool without scripting; by exposing only selected controls, they leave the user with a simpler, more straightforward interface. You can also create SuperTools, tools with a custom UI that can dynamically create nodes. The nodes inside are created and controlled using Python scripts, presented as a single new node with a customisable interface.
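As a small illustration of the scripted end of that range, the sketch below creates an OpScript node from Python and gives it a one-line Lua body that stamps a note attribute onto the locations it matches; the CEL expression and attribute name are only examples.

# Minimal sketch: creating an OpScript node from Katana's Python API.
from Katana import NodegraphAPI

op = NodegraphAPI.CreateNode('OpScript', NodegraphAPI.GetRootNode())
op.getParameter('CEL').setValue('/root/world//*', 0)  # example CEL expression
op.getParameter('script.lua').setValue(
    'Interface.SetAttr("info.note", StringAttribute("reviewed by lighting"))', 0)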
Pipeline Development
Katana includes further functionality to support pipeline development. The software’s USD capabilities continue to expand. As well as the Hydra viewport described above, they now include open-source USD import and export nodes, allowing teams to use USD in production with Katana.
The materials and lighting of the lion can be adjusted while viewing the outcome from multiple camera angles during a live render. The distance from the camera and the viewing angle both have an impact on how the artist would adjust lights or materials.
OpenColorIO (OCIO) colour management is a part of all image-related operations, making certain that colour spaces are correct. The software supports Python, Lua, C and C++ scripting and programming languages, plus libraries such as ILM’s Imath. The software also includes APIs for connecting to asset management tools, for example to make your own custom tools with logic, custom UI and dynamic properties. Other APIs allow you to add your own render engine, and to make complex connections between Katana and render farms.
Katana's rendering system is compatible with existing renderer plugins including V-Ray, Arnold, 3Delight, RenderMan and Redshift. All use the same API, which can power custom renderer plugins. The 3Delight plugin is open sourced and can be used as a production reference.
Katana ships with a rendering plugin from 3Delight’s developers, and supports interactive live rendering. Live rendering responds to any changes in the scene made to lights, shaders, cameras, geometry, volumes, materials or global settings. Special workflows like light mixing adjust lighting interactively on finished frames, while the edits are fed back directly to lights or groups of lights in the GafferThree lighting tools. The OSL-based shading engine is compatible with both Katana and Maya and allows direct transfer of look development files between them. www.foundry.com
Autodesk’s Flame 2021.2 update makes the finishing environment more flexible and customisable, with better performance. A new object-based keyer identifies and generates a matte for major objects in a bounding box. This update also increases speed and improves ease of use within Flame’s Effects environment. Users have a new in-context node Search tool, and wider media format compatibility.
Like the semantic keyers in earlier Flame releases for the sky and for extracting parts of the human body, head and face, the new Salient Keyer uses object-recognition machine learning to identify and generate a matte for the most prominent, or ‘salient’, object in a bounding box. However, this new keyer is not object-specific, but helps to isolate objects in an image. The bounding box can be animated, and reframing will produce good results for recognisable objects.
Salient keyer
The speed of interaction for navigating and scrubbing larger, more complex timelines is now much quicker. The scope of a timeline search preset can be determined in advance, and machine learning models now load on-demand. Caching is improved – the caches persist so you no longer have to recreate them after re-opening a project.
Storyboard thumbnail generation is optimised. Storyboard thumbnails are now generated asynchronously so that users no longer have to wait for all segment thumbnails to appear before selecting the previous or next segment. This improves shot-to-shot navigation. Starting a session by generating thumbnails first also speeds up navigation, and thumbnails are cached, persisting when the application is restarted. However, modifying a Timeline FX requires the thumbnail to be regenerated.
A new Single / Dual panel UI layout is also now available, displaying all the tools at the artist’s disposal. The single view shows one menu at a time, whereas the dual view displays two columns showing twice the amount of information.
A new Search tool has been added to allow users to access and add nodes faster in the Batch, BFX and Action Schematics, as well as in the Image node and Gmask Tracer tools. This means Flame artists can quickly search through all nodes that can be added and attach them to schematics as they see fit.
Searching colour
Any tools can be added to a schematic from any node bin. The new Search tool allows artists to add regular nodes in Batch, Action, OFX plugins and Matchboxes, and community-generated Matchbox tools will also appear in the list. A new preference panel for Search allows artists to control which nodes they see and which are hidden based on a favourites, tagging and hiding system.
Compatibility with ARRI, Red, Codex X2X HDE, Pixspan and Sony XAVC formats has been updated with this release. Also, content created in Flame Family software can now be exported in Portable Network Graphics (PNG) format. www.autodesk.com
Angus Kneale, former Chief Creative Officer and Co-founder of The Mill New York, has launched Preymaker, a collective of creatives, technologists and producers. To innovate and create content for brands and companies, they use a custom cloud-based platform created with Amazon Web Services (AWS). Angus’ partners in this venture are Melanie Wickham, former Executive Producer and Director of Production, and Verity Grantham, former Chief of Staff, both also from The Mill New York.
Angus said, “Mel, Verity and I are proud to have had a hand in The Mill’s legacy of work, calibre of artists and producers and the creative culture that inspired and supported them. We’re continuing that spirit of innovation at Preymaker with our focus on creativity, technical development and people.
Cloud Native
“Our team of artists, producers and technologists collaborate globally, entirely in the cloud, making Preymaker one of the first content makers that is 100% cloud native, which means the team can use up to date systems and software at scale, as soon as it is available. This allows continuous experimentation and innovation, which is at the heart of Preymaker’s mission to create exceptional work with our clients and partners.”
The Preymaker name comes from Angus' working farm in upstate New York, which features orchards and apiaries. The surrounding area is a wild landscape of large trees, waterfalls and wildlife. Angus said, “We use it as a metaphor for what we do, creating that same spirit of wonder, magic and awe for our clients.”
Preymaker’s home base is a production studio in SoHo, New York City, serving as a central hub and connection for a growing staff who work both remotely and on-premises.
Background
After originally working at The Mill London, Angus co-founded The Mill in the US, transforming it from a London-based boutique to a multi-national facility. He was instrumental in creating significant IP such as The Blackbird, which was a Cannes Innovation Gold Lion winner. An electric car that transforms to match the dimensions of almost any car, it can also be programmed to replicate typical driving characteristics such as acceleration curves and gear shifts. Meanwhile, it captures footage of the surrounding environment through its camera array and stabilisation unit.
He worked with his team to bring Mascot to market, a proprietary real-time animation system that enables CGI characters to be performed and animated live using a combination of Unreal game-engine technology and motion sensors.
He also directed PETA’s ‘98% Human’ spot that condemns the entertainment industry for its abuse of animal actors and advocates the alternative potential of using lifelike computer-generated creatures. The spot received a Cannes Gold award and a standing ovation led by Dr Jane Goodall at the Great Apes Summit.
Angus has been working most recently with teams of PhD researchers using computer vision and machine learning to create and develop new systems for advertising, film and media.
The Team
Over the past 20 years, Melanie Wickham has held senior production roles at creative studios including The Mill, Absolute Post and Animal Logic. “Preymaker is an opportunity to create a community where there are no boundaries, which extends to projects of varied media and disciplines we undertake, aspirations of our team and expectations of our clients.”
Verity Grantham’s experience includes films and commercials working with Michel Gondry, Fredrik Bond, Nicolai Fuglsig, Daniel Wolfe, Martin de Thurah, Jim Jenkins, Jonathan Glazer, Anthony Minghella and Stanley Kubrick. “Our virtual, cloud-based capabilities, which we began to develop well before the pandemic shut everything down, are serving us and our clients well. Technology married strategically and imaginatively to creative is the way forward and the key to success for us and our clients.”
Preymaker has simplified the processes on the company’s cloud-based platform for clients for ease of use and accessibility. The team has kicked off its first projects collaborating with McCann, BBDO, 72andSunny and Johannes Leonardo, and directors Peter Thwaites, Daniel Wolfe, Lance Accord and David Gordon Green. preymaker.com
The NVIDIA Omniverse platform is an RTX-based 3D simulation and collaboration platform capable of simulating photoreal 3D objects and scenes in real time. NVIDIA launched the platform’s open beta at the virtual GTC event this week.
Using the platform, remote teams can collaborate simultaneously on projects in a way similar to editing an online document. Typical users and applications would be architects iterating on 3D building design, animators revising 3D scenes, and engineers collaborating on autonomous vehicles.
Artists and engineers working in robotics, automotive, architecture, engineering and construction, manufacturing and M&E all need to continuously improve their creative processes and animation pipelines over time. The Omniverse Platform acts as a hub, where new capabilities are exposed as micro-services to connected clients and applications. It aims for universal interoperability across different applications and 3D systems vendors, and its real-time scene updates are based on open-standards and protocols.
Pixar’s USD and NVIDIA’s MDL
The platform supports real-time photorealistic rendering, physics, materials and interactive workflows between 3D software packages. It is based on Pixar’s Universal Scene Description (USD), a format for universal file interchange between 3D applications, directly sharing most aspects of a 3D scene while maintaining application-specific data.
The USD scene representation has an API allowing complex property inheritance, instancing, layering, loading on demand and other features. Omniverse uses USD for interchange through its central database service, called Nucleus (see below).
Materials in Omniverse are represented by NVIDIA’s open-source MDL (Material Definition Language). NVIDIA has developed a custom schema in USD to represent material assignments and parameters, preserving these during interchange between different application-specific material definitions. This standard definition enables materials to look similar if not identical across multiple applications.
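In USD terms, an MDL material can be expressed through the standard UsdShade schema with an ‘mdl’ source type. The sketch below, written with Pixar’s USD Python API, is only an illustration of that idea; the OmniPBR module name is an example and the exact Omniverse schema details may differ.

# Minimal sketch using Pixar's USD Python API (pxr); the MDL module name is an example.
from pxr import Usd, UsdGeom, UsdShade, Sdf

stage = Usd.Stage.CreateNew('vinyl_toy.usda')
mesh = UsdGeom.Mesh.Define(stage, '/World/Toy')

material = UsdShade.Material.Define(stage, '/World/Looks/Vinyl')
shader = UsdShade.Shader.Define(stage, '/World/Looks/Vinyl/MDLShader')
shader.SetSourceAsset(Sdf.AssetPath('OmniPBR.mdl'), 'mdl')    # MDL module (example)
shader.SetSourceAssetSubIdentifier('OmniPBR', 'mdl')          # material inside the module
material.CreateSurfaceOutput('mdl').ConnectToSource(shader.ConnectableAPI(), 'out')

# Bind the material to the mesh and save the layer.
UsdShade.MaterialBindingAPI(mesh.GetPrim()).Bind(material)
stage.GetRootLayer().Save()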
USD structure allows you to only relay the changes you have made to objects, environments and other design elements within the collaborative scene, which means edits are efficiently communicated between applications while maintaining overall integrity.
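A rough sketch of that idea using Pixar’s USD Python API: open the shared stage, direct edits to the session layer, and only that thin layer of overrides needs to travel to the other applications. The file and prim paths are placeholders.

# Minimal sketch using Pixar's USD Python API; file and prim paths are placeholders.
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open('shared_scene.usd')

# Author changes into the session layer so the shared scene file stays untouched;
# only this sparse set of overrides has to be relayed to collaborators.
stage.SetEditTarget(Usd.EditTarget(stage.GetSessionLayer()))

prim = stage.OverridePrim('/World/Building/Facade')
UsdGeom.Imageable(prim).CreateVisibilityAttr().Set(UsdGeom.Tokens.invisible)

print(stage.GetSessionLayer().ExportToString())  # the delta that gets shared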
Inside Omniverse – Tools and Services
On top of Omniverse’s USD / MDL foundation, the platform has five main components – Omniverse Connect, Nucleus, Kit, Simulation and RTX. These components, plus the connected third-party digital content creation (DCC) tools and other connected Omniverse microservices, make up the whole Omniverse system.
Omniverse Nucleus has a set of basic services that various client applications, renderers and microservices use to share and modify representations of virtual worlds. Nucleus works through a publish/subscribe model – that is, Omniverse clients can publish modifications to digital assets and virtual worlds to the Nucleus Database (DB), or subscribe to their changes. Changes are transmitted in real-time between connected applications.
Omniverse Connect libraries are distributed via plugins that client applications use to connect to Nucleus and to publish and subscribe to individual assets and complete worlds. Once synchronised, a software plugin will use the Omniverse Connect libraries to apply updates from outside and publish changes generated from inside – as necessary.
As the application makes changes to its USD representation of the scene, Omniverse Connect keeps track of the differences and publishes them to Nucleus for distribution to subscribers.
Omniverse Kit is a toolkit for building native Omniverse applications and microservices. It is built on a base framework with functionality accessed through light-weight extensions that are plugins authored in Python or C++. A flexible, extensible development platform for apps and microservices, Kit can be run headless or with a UI that can be customised with a UI engine.
The Extensions are building blocks that users assemble in many ways to create different types of Applications. They include RTX Viewport Extensions, Content Browser Extensions, USD Widgets and Window Extensions and the Omniverse UI. As they are all written in Python, they are very customisable and therefore the catalogue of extensions is expected to grow. They are supplied with complete source code to help developers create, add and modify tools and workflows.
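As an indication of how lightweight these building blocks are, the skeleton below follows the standard Kit extension pattern, a Python class derived from omni.ext.IExt with start-up and shutdown hooks, and opens a small omni.ui window; the window contents are just an example.

# Minimal sketch of an Omniverse Kit extension (loaded by Kit, not run standalone).
import omni.ext
import omni.ui as ui

class ExampleExtension(omni.ext.IExt):
    def on_startup(self, ext_id):
        # Build a small window when the extension is enabled.
        self._window = ui.Window('Example Extension', width=300, height=200)
        with self._window.frame:
            with ui.VStack():
                ui.Label('Hello from a Kit extension')

    def on_shutdown(self):
        # Release UI resources when the extension is disabled.
        self._window = None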
In the Omniverse Pipeline, DCC applications, plus those the user has built using Omniverse Kit, can all be exported to the USD file format and have support for MDL materials. Using Omniverse Connector plugins, Omniverse portals are created between these apps and the Nucleus Database. The Nucleus server also supplies functionality as headless micro-services, and delivers rendered results to different visualisation clients - including VR headsets and AR devices.
Simulation in Omniverse is done through NVIDIA plug-ins or microservices for Omniverse Kit. Currently, Omniverse physics includes rigid body dynamics, destruction and fracture, vehicle dynamics and fluid dynamics. One of the first available simulation tools is NVIDIA’s PhysX, the open-source physics simulator used in computer games. The objects involved in the simulation, their properties, constraints and so on are specified in a custom USD schema. Kit has tools for editing the simulation set-up, start/stop and adjusting parameters.
Omniverse supports renderers that comply with Pixar’s Hydra architecture. One of these is the new Omniverse RTX viewport. RTX uses hardware RT cores in Turing and upcoming NVIDIA architectures for real-time ray tracing and path-tracing. Because the renderer doesn’t rasterise before ray-tracing, very large scenes can be handled in real-time. It has two modes – traditional ray tracing for fast performance and path tracing for high quality results.
Omniverse RTX natively supports multiple GPUs in a single system and will soon support interactive rendering – in which the rendered image updates in real time as changes are made in your scene – across multiple systems.
Early Access and Software Partners
The open beta of Omniverse follows a one-year early access program in which Ericsson, Foster + Partners, ILM and over 40 other companies – and as many as 400 individual creators and developers – have been evaluating the platform and sending reactions and ideas to the NVIDIA engineering team.
At this time, NVIDIA Omniverse connects to a range of content creation applications, and NVIDIA has created demos, called Apps and Experiences, to show how it works in the different workflows. Apps are built using Omniverse Kit and serve as a starting point for developers learning to create their own apps. They will continually gain new features and capabilities. Experiences, on the other hand, are packages containing all the components and extensions needed to address specific workflows.
Early adopters of NVIDIA Omniverse so far include architectural design and engineering firm Foster + Partners in the UK, which is using Omniverse to help with data exchange workflows and collaborative design processes. Woods Bagot, an architectural and consulting practice, is working with Omniverse to set up a hybrid cloud workflow for the design of complex models and visualisations of buildings, and Ericsson telecommunications is using real-world city models in Omniverse to simulate and visualise the signal propagation of its 5G network deployment.
Omniverse has support from software companies including Adobe, Autodesk, Bentley Systems, Robert McNeel & Associates and SideFX. Blender is working with NVIDIA to add USD capabilities facilitating Omniverse integration with its software. The goal is to allow artists and designers to use the collaborative functionality of Omniverse while working with their preferred applications.
Autodesk’s senior vice president for Design and Creation Products Amy Bunszel said, “Projects and teams are becoming more complex and we are confident Autodesk users from all industries will respond to Omniverse’s ability to create a more collaborative and immersive experience. This is what the future of work looks like.”
Interested users can sign up for the Omniverse open beta program on the NVIDIA website. It will be available for download in the coming months. www.nvidia.com