

#Nuke 13 beta full#
SyncReview is now out of beta and allows you to run fully synchronised dailies remotely, updating edits and annotating clips as you go. USD support has now been added to Nuke's Camera, ReadGeo, Lights and Axis nodes. Materials, reflections and lights now look a lot more accurate in the preview.

Full list of features

Hydra

A substantial improvement to the 3D system is the introduction of the Hydra renderer for your viewport previews. The Foundry is replacing Nuke's 3D Viewport with Hydra, which is GPU accelerated and gives you a much more accurate view of materials, reflections and lighting while you're setting up your 3D scene.

Most importantly, you can now have a convenient second-monitor setup without buying a separate video I/O card, which is good news for anyone running Nuke on their own machine at home. If you use a second monitor with a video I/O card, setting it up as a full-screen monitor has been an option in the past as well. What the Foundry has added in this release is separate colour transforms for the Viewer in Nuke and the monitor, plus the ability to synchronise the Viewer's and the monitor's gamma/gain settings with a single toggle if you do want them to match.

The CopyCat node trains a neural network model from a few input-output image pairs. The Inference plugin then applies that image-to-image transformation to the rest of the image sequence, or to a brand-new sequence. The important thing to remember is that this neural network learns an image-to-image transformation. So it isn't going to give you a new Roto node with shapes as an output, but the image-to-image principle itself can still be used for many, many applications. For example, you could take clips, add noise to them, and then show them to CopyCat the other way around, telling it that the noisy images are the input and that it should learn to denoise them for the output. Or give it a grey-shaded CG render and a normals pass, to make your own "normals-from-luma" tool. Or greenscreen footage and despilled images, or defocused images and in-focus images, snowy scenes / summer scenes, day-time / night-time, cloudy sky / sunny sky, famous actor with a moustache / famous actor without a moustache…

As people get more and more familiar with training their own neural networks, the Foundry might release a few more pre-trained models. They are already including a Deblur node to remove motion blur from images and an Upscale node to make images twice as big. You can also expect users to start uploading their own CopyCat experiments to GitHub and Nukepedia as they get more practice with it. This is going to be the first feature I will want to check out, since I've trained neural networks during my UG and MSc studies, so watch this space for examples soon. For the moment, you cannot change the underlying model that CopyCat uses, but there is talk that in the future you could import your own PyTorch models; even so, I think the current approach is already an amazing way to get people into deep learning.

The whole 3D space in Nuke is being reworked, with the first stages now adding USD support to the Camera, Axis, Lights and ReadGeo nodes.
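The denoising idea above boils down to building training pairs "the other way around": take clean frames, synthesise noisy versions, and tell the network that noisy is the input and clean is the target. Here is a minimal numpy sketch of preparing such pairs; the frame sizes and the Gaussian noise model are illustrative assumptions, not what CopyCat does internally:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend these are a few clean frames from a clip (16x16 grayscale stand-ins).
clean_frames = [rng.random((16, 16)) for _ in range(5)]

def add_noise(frame, sigma=0.05, rng=rng):
    """Add Gaussian noise and clamp back to the displayable [0, 1] range."""
    noisy = frame + rng.normal(0.0, sigma, size=frame.shape)
    return np.clip(noisy, 0.0, 1.0)

# Training pairs "the other way around": the noisy frame is the input,
# the original clean frame is the target output the network should learn.
training_pairs = [(add_noise(f), f) for f in clean_frames]
```

The same pairing trick works for every example in the list: render the "after" state yourself (despilled, upscaled, re-lit), and pair it with the "before" state as input.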
You save the learning result as a separate .cat file (you need a NukeX license to train with CopyCat), which you can then import into the Inference plugin (available in every version of Nuke).
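The train-once / apply-anywhere split is the familiar save-model, load-model pattern. A toy sketch of that pattern using Python's pickle and a stand-in parameter dictionary — the real .cat file is Foundry's own format and the model here is purely hypothetical:

```python
import os
import pickle
import tempfile

# Stand-in for a trained model: just the learned parameters.
# (The real .cat file is Foundry's own format; pickle is only for illustration.)
trained_model = {"gain": 1.2, "lift": 0.03}

# "Training" side (NukeX in the real workflow): save the result to disk.
path = os.path.join(tempfile.mkdtemp(), "effect.cat")
with open(path, "wb") as fh:
    pickle.dump(trained_model, fh)

# "Inference" side (any Nuke licence): load the file and apply the effect.
with open(path, "rb") as fh:
    model = pickle.load(fh)

def apply_effect(pixel, model):
    """Apply the learned per-pixel transform."""
    return model["gain"] * pixel + model["lift"]
```

The point of the split is that the expensive part (training) happens once, while the cheap part (applying the saved transform) can run anywhere the file can be loaded.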
What I think are the most interesting features:

Artificial Intelligence Research

Nuke 13.0 introduces two new nodes: the CopyCat node and the Inference plugin. Together they allow you to train a neural network and create your own AI-powered effects in NukeX. It works through an image-to-image supervised training workflow: the idea is that you create a few frames using "old-school" compositing methods (e.g. roto a person using the Roto node) and then give the outputs (the matte images) and the inputs (the original images) to the CopyCat node, which tries to learn how to create similar outputs from new inputs.
