Gary Adcock is a veteran filmmaker, photographer, technology guru, and an incredibly knowledgeable resource on media and entertainment workflows, camera and production technologies, and how it all relates to post-production and delivery.
He’s worked with big-time brands like Adobe, CNN, McDonald’s, HBO, NASA, and MLB, and is a noteworthy influencer when it comes to cinematography, on-set production, and post-production technologies. Heck, he’s also a regular contributor to the MASV blog. Read his article about remote production.
MASV recently caught up with Gary for an engaging chat on digital data and photogrammetry – which allows the creation of virtual models and environments by recording and measuring the physical world – along with its implications for virtual production and society at large.
This interview has been edited for length and clarity.
Jump to section:
- What is Photogrammetry?
- How does Photogrammetry work?
- Photos vs Videos: Which is Better for Photogrammetry?
- Is Photogrammetry an Extension of Virtual Production?
- How Prevalent is Photogrammetry in Virtual Production?
- How is Photogrammetry Used in Virtual Production?
- When did Photogrammetry Really Begin to Take Off?
- Have Photogrammetry and Virtual Production Changed Who Needs to be On-Set?
- What Does the Future Hold for Digital Data and Filmmaking?
Large File Transfer for Filmmakers
Use MASV to send large media files from set to editors, VFX artists, colorists, and more.
What is Photogrammetry?
You take a series of pictures and stitch them together to extrapolate an object, and if you take enough photographs around it, you’ll have a complete composite of everything. And then you can actually build something to scale.
NASA uses photogrammetry to map planets and things like that. And as we get into this world of virtual production with game information and 3D models and live camera data, we’re actually doing exactly the same thing. We’re taking a data stream, an audio stream, and a video stream and marrying them all together.
Photogrammetry can use Lidar, which is a light detection and ranging system that’s been used for mapping spaces and all kinds of stuff. It used to be incredibly expensive, but now my iPhone has Lidar in it. Apple’s starting to allow you to capture the environment around you and have it grow into a 3D world for virtualization, for gaming, for any of that stuff. That kind of technology is giving people like us the power to do these kinds of tricks.
You can actually download multiple types of free software for Mac and Windows that allow you to stitch photos together and make a 3D model. Now, the modeling in photogrammetry is independent of the Lidar, which is added information.
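For the curious, the first step that stitching software performs is finding feature points shared between overlapping photos, which it then triangulates into a 3D point cloud. Here’s a minimal sketch of that matching step, assuming Python with the opencv-python package installed and two overlapping photos with hypothetical file names:

```python
# Minimal sketch: find feature points shared by two overlapping photos.
# This matching step is the foundation of the structure-from-motion process
# that photogrammetry tools build on (file names here are hypothetical).
import cv2

img_a = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute ORB descriptors in each photo.
orb = cv2.ORB_create(nfeatures=5000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Match descriptors; crossCheck keeps only matches that agree both ways.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

print(f"{len(matches)} shared points between the two photos")
# A full photogrammetry package repeats this across every photo pair, then
# triangulates the shared points to recover camera positions and 3D geometry.
```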
There are also some really cool apps for this kind of scanning that help you build models. The problem is they get really large, really fast.
I tried to scan my kitchen and quickly realized that I had used up 200 gigs of space on my phone.
Send Up To 15 TB With MASV
MASV can deliver up to 15 TB with a single file over the cloud, fast.
How Does Photogrammetry Work?
You basically do a photo mapping.
You gather a series of photos like you would for a contact sheet, and then the software needs different controls on how to process them. For example, whether you go across first or up and down first, and which direction the first row goes. There are rules on how the individual software sees the sequence, but it’s not any different than how you create VR.
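As a toy illustration of those ordering rules (a hypothetical example, not any particular package’s convention), here’s the difference between sweeping across the rows in a serpentine pattern and walking the columns first:

```python
# Two hypothetical capture orderings for the same grid of photos.
# Serpentine: sweep across the first row, then come back along the next.
def serpentine_order(rows: int, cols: int):
    order = []
    for r in range(rows):
        row = [(r, c) for c in range(cols)]
        order.extend(row if r % 2 == 0 else reversed(row))
    return order

# Column-first: go up and down each column before moving across.
def column_order(rows: int, cols: int):
    return [(r, c) for c in range(cols) for r in range(rows)]

print(serpentine_order(2, 3))  # [(0,0), (0,1), (0,2), (1,2), (1,1), (1,0)]
print(column_order(2, 3))      # [(0,0), (1,0), (0,1), (1,1), (0,2), (1,2)]
```

Which convention the software expects determines how it lines up neighboring frames, which is why those controls matter.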
Static VR images map the same way across multiple cameras. And that’s the interesting side of this: we’re now giving individual users a capability we hadn’t thought about before. It gives a lot of power to the users, right?
But this also means that if you’re creating virtual environments, I could go in, capture a background through photogrammetry, and then use that later as a location. A simple example is a show called Station Eleven on HBO Max. They went in and did photogrammetry of their sets so that, if and when something came up, they would have the capability to put an actor in front of a background anywhere in the world and drop this background in. They could shoot the actor on a green screen and generate the background behind them.
With virtual production, you can put that up on a wall in a motion environment and actually show how it works.
One of the manufacturers I’m working with had a screen that was three times finer than anything on the market. It had a resolution about the same as an 8K TV. So it’s a 4K, 10-bit wall, and more bits make for a better picture, for a longer tonal range, and everything else. And instead of simply showing images on the screen, we can show a 3D model inside of Unreal Engine. And that’s the cool part about it. It’s not even a physical location. And you can’t even tell where the wall stops and the stage begins, because the melding is so good.
Source: Art of LED Wall Virtual Production, fxguide
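To put the 10-bit point in context, the arithmetic behind why more bits mean a better picture and a longer tonal range is straightforward:

```python
# Back-of-envelope: tonal levels per channel at common bit depths.
for bits in (8, 10, 12):
    levels = 2 ** bits      # distinct levels per color channel
    colors = levels ** 3    # combinations across R, G, and B
    print(f"{bits}-bit: {levels:,} levels per channel, {colors:,} possible colors")
# 8-bit  ->   256 levels per channel, ~16.8 million colors
# 10-bit -> 1,024 levels per channel, ~1.07 billion colors
```

Those extra levels are what give the wall its longer tonal range and keep smooth gradients from banding on camera.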
Photos vs Videos: Which is Better for Photogrammetry?
A still image is going to have a much greater resolution than a video frame. A 4K image is about 4,000 pixels across, but most of our still cameras are six, eight, even 10,000 pixels across. It’s not uncommon to have a still camera that’s much higher resolution.
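The pixel math bears that out. Here is a rough comparison, using a UHD video frame and a typical modern high-resolution still (the exact dimensions are illustrative, not any specific camera):

```python
# Rough pixel counts: a UHD/4K video frame vs. a high-resolution still photo.
video_4k = 3840 * 2160        # ~8.3 megapixels per video frame
still_45mp = 8192 * 5464      # ~44.8 megapixels, typical of high-end stills
print(f"4K video frame: {video_4k / 1e6:.1f} MP")
print(f"45 MP still:    {still_45mp / 1e6:.1f} MP")
print(f"One still holds roughly {still_45mp / video_4k:.0f}x the pixels of a 4K frame")
```

More pixels per shot means more detail for the software to match and measure against.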
But ultimately it’s about being able to capture a large number of images, stitch them together, and apply that information. And it’s all math; the software does it intelligently. I’ve been working on this stuff for years – I once used QuickTime VR to produce panoramas of Taco Bell interiors for a signage project. Rather than renting out three restaurants and shooting them, which is expensive, we made a virtual Taco Bell. And we could just drop everything in at scale – we could preview for the restaurant owners how each graphic would look in the restaurant. We literally went out and shot the interior of the store, and then shot the exterior of the store as a resolution map. And it was all done with stills. 🌮
This process is now far faster because of digital. Believe it or not, that comes from the location data in the cameras. The camera knows and can determine position, order of shots, and everything else, and the software marries the images based on sequential numbering. When I started doing this, we were still using film to do it. That’s a sidebar to all of this, but it really shows how deeply the technologies around us tie together everything we know.
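As a small illustration of the kind of in-camera metadata Gary is describing, here’s a sketch that pulls the timestamp and GPS block out of each photo so software can order and roughly position the shots. It assumes Python with the Pillow package and a hypothetical folder of JPEGs:

```python
# Sketch: read the timestamp and GPS metadata a camera embeds in each photo.
# Photogrammetry tools lean on exactly this data to sequence and roughly
# position the shots before matching them. ("scan_photos" is hypothetical.)
from pathlib import Path
from PIL import Image

DATETIME_TAG = 306    # standard EXIF "DateTime" tag
GPS_IFD_TAG = 34853   # standard EXIF "GPSInfo" IFD pointer

for path in sorted(Path("scan_photos").glob("*.jpg")):
    exif = Image.open(path).getexif()
    taken = exif.get(DATETIME_TAG)      # e.g. "2024:05:01 14:32:10"
    gps = exif.get_ifd(GPS_IFD_TAG)     # empty if the camera has no GPS
    print(path.name, taken, dict(gps))
```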
Is Photogrammetry an Extension of Virtual Production?
It’s really the other way around: virtual production is an application of photogrammetry.
People think, ‘I could just put a picture up there,’ but a picture doesn’t adjust for parallax: how objects move, and how the background moves against me in the foreground. The way the real world moves is different from a static picture. Photogrammetry allows you to change the perspective of objects in the background as you move, so you can walk and turn and do things. And that gives an extended level of realism to what’s being done in the virtual world.
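A toy pinhole-camera calculation shows why a flat backdrop can’t fake that: when the camera slides sideways, nearby objects shift across the frame far more than distant ones, while a single photo has no depth to shift with. The numbers below are purely illustrative:

```python
# Toy pinhole-camera parallax: how far an object appears to shift in the image
# when the camera moves sideways, depending on how far away the object is.
focal_px = 2000.0      # focal length expressed in pixels (illustrative)
camera_move_m = 0.5    # camera slides half a metre to the side

for depth_m in (2, 10, 100):
    # For a lateral move b, an object at depth Z shifts by f * b / Z pixels.
    shift_px = focal_px * camera_move_m / depth_m
    print(f"object {depth_m:>3} m away shifts ~{shift_px:.0f} px in frame")
# Near objects sweep across the frame; far ones barely move. A flat photo
# shifts uniformly, which is exactly what gives it away on camera.
```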
How Prevalent is Photogrammetry in Virtual Production?
There are six virtual stages built or in the process of being built in Chicago. There are probably around 30 in L.A., and at least 10 in the Toronto metro area. The largest one in Montreal is for Star Trek: Discovery. There’s one there that I think is around 110 feet long.
How is Photogrammetry Used in Virtual Production?
Traditionally it’s part of the VFX department. That would be somebody who understands how to shoot background plates and background information. Those are the kinds of people you want doing it. The others I find doing it tend to have a lot of experience in architectural photography.
There are lots of these kinds of technologies in still photography – there’s image stacking, where you take a series of images to hold focus across a range or extend the focus on an object. Or, in the case of the James Webb Space Telescope, stacking 27 different photos of the same object captured on different mirrors with different information, and combining those into a single image that gives you infrared, ultraviolet, transmissive light, and all those kinds of things at the same time. The technology behind photogrammetry is used in a lot of different ways.
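Focus stacking, at least, is simple enough to sketch. Here’s a minimal version, assuming Python with opencv-python and numpy and a set of pre-aligned frames focused at different distances (file names hypothetical); it keeps, for each pixel, whichever frame is sharpest there:

```python
# Minimal focus-stacking sketch: per pixel, keep the frame with the strongest
# local detail (Laplacian response), i.e. the frame that is in focus there.
import cv2
import numpy as np

paths = ["focus_near.jpg", "focus_mid.jpg", "focus_far.jpg"]  # pre-aligned frames
frames = [cv2.imread(p) for p in paths]

# Measure per-pixel sharpness of each frame with the Laplacian operator.
sharpness = [
    np.abs(cv2.Laplacian(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY), cv2.CV_64F))
    for f in frames
]

# Pick the sharpest frame index per pixel and assemble the composite.
best = np.argmax(np.stack(sharpness), axis=0)
stacked = np.zeros_like(frames[0])
for i, frame in enumerate(frames):
    stacked[best == i] = frame[best == i]

cv2.imwrite("focus_stacked.jpg", stacked)
```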
Like I said, it originated with NASA. They used this to map the moon. And that’s why, when you see the old NASA photos, particularly the stuff on the moon, you’ll always see these little X’s in the frames everywhere. Those were used to know how big the astronaut was, so they could determine how far away he was and how high he was in the environment, because they’d mapped all that out.
When man went to the moon in 1969, when this was all still shot on film, there was no digital. They used that grid marker on the focal plane, which allowed them to produce the content at the proper scale. So we’re working with stuff that happened 50 years ago, and it’s now being evolved into what we’re doing in the virtual world.
File Transfer for Large Data Demands
Send and receive hi-res photos, videos, and audio with MASV.
When did Photogrammetry Really Begin to Take Off?
It’s been used for a long time in architectural renderings and site location work, for at least 30 or 40 years. And it’s been used in Hollywood for a number of years for background plates.
In automotive, there are things like the Blackbird car. It’s a car with 37 cameras mounted around it, and it’s designed to photograph the reflections on an automobile. It has an adjustable wheelbase and width, and it’s completely configurable with different kinds and sizes of tires; you can put SUV tires on it. So they drive it around and then drop a model on it. They get real tires and wheels, but everything on top of that is a fake car, and they generate the reflections on the car itself. That kind of technology has been used for a long time, particularly in automotive and architectural work.
This technology just keeps being accelerated because the computers and the tools we’re working with have become so powerful. You can now finally capture enough data to use it in a functional fashion that can enhance your project or your life. And it’s that ability to hold and maintain that data, the files, and all the things associated with it that makes it such a big deal.
And in Hollywood, what was the first thing that ever did that?
The answer is, of course, The Matrix and its famous bullet time sequence.
And believe it or not, this was developed originally not for The Matrix but for a Michael Jordan commercial. The person who became the unit production manager on The Matrix had worked on that Michael Jordan commercial, and they realized this would be a really cool thing to do. And they figured out how to do it. That’s the kind of thing you don’t think about – something that came out of somebody’s idea to do a really cool shot for a commercial turned into bullet time for The Matrix, which is now used on The Mandalorian.
Have Photogrammetry and Virtual Production Changed Who Needs to be On-Set?
Definitely. In the old world, a Visual Effects Supervisor would stop in and talk to the crew a little bit, and maybe produce some of the second unit or pickup stuff. Now, with shows like The Mandalorian, Star Trek: Discovery, and Lost in Space, where there’s a lot of really heavy VFX work, that’s a full-time position on set. It’s actually become really common.
The other thing, and I think it’s really important, is that the one crew position that’s never been defined in virtual production is an IT manager. There’s never been a need for an IT guy on set before. The Digital Imaging Technician (DIT) usually handled it, or one of the second assistants; maybe whoever was putting up the video village had to deal with all the IT.
The reality now is that technology is such on most sets that you need a frequency wrangler – somebody who knows what frequency everything is on and where all the loose electromagnetic radiation is. And it’s interesting, because you don’t think about it that way: photogrammetry and virtual production, in general, actually create new positions.
Photo by Jason Leung on Unsplash
What Does the Future Hold for Digital Data and Filmmaking?
It’s constantly changing.
SMPTE 2110 is going to start allowing more localized metadata to come into your video stream than it does now. I mean true, actual localization, based on your physical location.
And now we’re talking about being able to photograph and record environments with such a high caliber of detail that it can be reduced and reproduced as a simulation. We’ve got the capability to do a Holodeck; we’ve got glasses that are giving you real-time information because the data stream allows it, and the data stream is extrapolated from the environment around you.
And while they’re not exactly the same thing, how you process and move that metadata, whether it’s incoming or outgoing, works the same way. It’s about being able to handle data in a clean, precise fashion that makes it usable for all kinds of other avenues, not just the ones you think of now.
I mean, we keep talking about virtual production. But there are times when virtual production needs to be married to a traditional VFX workflow, and you’re only using the virtual stuff for the reflections and everything on it. They can put up panels that cast reflections mimicking the real world and save on visual effects costs, but that doesn’t mean they can replace the background all the time, or that they can get far enough away from the background yet.
But it’s all about data, how it moves, and how much of it you can capture at the source, whether it’s live cameras with the actors, photogrammetry on a set that you might need to use later, or an artificial environment being constructed in Unreal Engine.
The power of all of this is that the data can be modified and manipulated in ways that make it look more realistic to our eye.
Can you touch more on the localization of data coming in from cameras?
As broadcast moves forward, we have to move away from the existing specifications for how we deliver video, audio, and data. The move is to SMPTE 2110 and its various parts, which define the capability of generating independent pipelines to the end user for video, audio, and data. And data can be anything; it’s not just a timestamp.
It’s like in the original Blade Runner movie – you walk by something and it automatically starts talking to you. You walk by and something interacts with you directly, as you, because it knows that you’re there. That’s where this is going.
But this ability to separate the video from the audio from the data means a lot of different things in our world. The Precision Time Protocol (PTP) is going to allow us to disseminate data from multiple locations to a single endpoint at the correct time. And because video takes longer to move than audio or data, you can marry these at different levels and different speeds based on what’s going on.
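None of the code below is real SMPTE 2110 or PTP tooling; it’s just a toy sketch of the idea: video, audio, and data travel as separate streams, each packet stamped against a shared clock, and the endpoint lines them up by timestamp rather than by arrival order:

```python
# Toy sketch of the SMPTE 2110 + PTP idea: separate essences, one shared clock,
# re-aligned at the endpoint by timestamp instead of arrival order.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Packet:
    essence: str        # "video", "audio", or "data"
    timestamp: float    # seconds on the shared (PTP-disciplined) clock
    payload: str

# Packets arrive out of order because each essence takes its own path,
# and video is heavier, so it tends to land last.
arrivals = [
    Packet("audio", 0.040, "samples"),
    Packet("data",  0.040, "lens + position metadata"),
    Packet("video", 0.040, "frame 1"),
    Packet("audio", 0.080, "samples"),
    Packet("video", 0.080, "frame 2"),
]

# Group everything stamped for the same instant so it plays back together.
aligned = defaultdict(dict)
for pkt in arrivals:
    aligned[pkt.timestamp][pkt.essence] = pkt.payload

for ts in sorted(aligned):
    print(f"t={ts:.3f}s -> {aligned[ts]}")
```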
Better Remote Collaboration
At the intersection of photogrammetry, virtual production, and how we experience content is data. As hardware and software become more sophisticated, the amount of data available to us grows. This means higher resolutions, more immersive experiences, and more ways to personalize media to the individual viewer. However, it also means larger digital media files.
MASV is a file sharing service that lets media professionals quickly transfer terabytes of data to anyone in the world over the cloud. We consistently deliver files in record time thanks to our network of 150 global data centers and zero throttling on transfer speeds. All files shared through MASV are encrypted and backed by a Trusted Partner Network assessment, the industry standard for content protection.
Sign up for MASV today and get 20 GB for free to transfer your data.
MASV File Transfer
Get 20 GB to use with the fastest, large file transfer service available today, MASV.