Software from the future: slate detection

By S Simmons. Filed in software from the future.

In the future, non-linear editing software might be able to recognize the unique characteristics of a camera slate in the frame and be able to mark or organize footage automatically.
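As a rough sketch of how such a feature might start, here is one possible heuristic (entirely my own assumption, not a real NLE API): a clapper slate's striped sticks and chalk board are high-contrast black and white, so frames containing one tend to have an unusually large fraction of near-black and near-white pixels.

```python
# Hypothetical slate-detection heuristic; function names and thresholds
# are illustrative assumptions, not features of any shipping software.
import numpy as np

def slate_score(frame: np.ndarray) -> float:
    """Fraction of pixels in a grayscale frame that are near-black or near-white.

    Slate sticks and boards are high-contrast, so slate frames score high.
    """
    dark = frame < 40
    bright = frame > 215
    return float((dark | bright).mean())

def mark_slate_frames(frames, threshold=0.5):
    """Return indices of frames that look slate-like (score above threshold)."""
    return [i for i, f in enumerate(frames) if slate_score(f) > threshold]
```

In practice this crude score would be run per shot and combined with scene-change detection, or replaced outright by template matching and OCR of the slate's scene/take fields.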

6 comments to “Software from the future: slate detection”

  1. Comment by MarkB:

    A few years back I beta-tested a piece of software that could pull timecode and keycode data from picture burn-ins and create clip metadata; it used pattern-recognition technology to pick up the data. Combine that concept with the scene-change detection already built into some software and you might be there, although with the inevitable 99.99% move to digital, record-time metadata is surely the best option.
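    Once pattern recognition has read the burn-in digits, the timecode string still has to become clip metadata. A minimal sketch of that last step (the helper names are mine, not from the software MarkB tested), using non-drop-frame arithmetic:

```python
# Non-drop-frame timecode arithmetic for turning recognized burn-in
# strings into frame-count metadata. Illustrative helpers only.

def tc_to_frames(tc: str, fps: int = 24) -> int:
    """Convert 'hh:mm:ss:ff' to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames: int, fps: int = 24) -> str:
    """Convert an absolute frame count back to 'hh:mm:ss:ff'."""
    ff = frames % fps
    ss = frames // fps
    return f"{ss // 3600:02d}:{ss // 60 % 60:02d}:{ss % 60:02d}:{ff:02d}"
```

    Note this ignores drop-frame counting at 29.97 fps, which a real tool would have to handle.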

  2. Comment by Judith:

    I’ve often thought that something which could detect flash frames from telecined film would also make a useful break-up point.

    It’s not as universal as slates with the increase in digital, but perhaps another option to include within the same piece of software/code?

  3. Comment by Paul:

    This doesn’t go far enough. As we get away from tapes and into RED and other data-based acquisition, everything should be linked to the metadata and timecode of the footage, slates and shot logs included, so there’s no need for image detection. Why shouldn’t the digi-slate send a signal to the camera wirelessly? :)

  4. Comment by editblog:

    Great comments, all. The more we move to digital acquisition, the more data can be kept from shoot to post, which is great. But there’s still a lot of film being shot, and it isn’t going anywhere for a while. And even in digital acquisition people will still use slates, so I can see this being of use there too.

  5. Comment by Rob:

    All digital cameras and sound recording devices should be linked with a two way wireless link to a central controller. This link would start and stop all “takes” and cause metadata to be recorded at the head of each take. All these recording devices would be synchronized and maintain sub-frame lock throughout the shooting day. All digital data, sound and picture, would be transferred into one location/computer where it would be brought together and perfectly synchronized fully automatically. Software would do voice recognition to transcribe the actual dialog, match it to the script and generate coverage reports along with the dailies.

    Avid already does the voice recognition and script match-up now. Zaxcom already builds wireless mics that store digitally recorded sound with a twelve-hour capacity. They can also now receive a signal to jam-sync the timecode. That could be enough to do what I’m talking about, as long as the timecode is kept unique for the whole shoot. Or they could receive and record some metadata to annotate scene/take/etc.

    I think Red should pioneer this idea of integrating a completely automated acquisition. I think the slate could be eliminated altogether.

    By the way, I’ve started snapping the sticks before announcing the slate data for audio. This lets me go through every clip and trim the video to exactly when the sticks come together, and the audio to the sound of the sticks. That way I can line up the audio and the video from the very start of each trimmed clip and still hear the audio slate and see the video slate. If I ever get an assistant, this will be an easy routine to train.
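    Rob’s trim-to-the-sticks routine is also the kind of thing software could automate: the sticks closing produce a sharp transient that is usually the loudest sample near the head of a take. A toy sketch of that idea (function names and the simple peak heuristic are my own assumptions):

```python
# Locate the stick snap as the loudest transient in the audio, then map
# it to a video frame. A crude illustration, not production sync code.
import numpy as np

def find_stick_snap(samples: np.ndarray, sample_rate: int) -> float:
    """Return the time in seconds of the loudest transient in the audio."""
    return int(np.argmax(np.abs(samples))) / sample_rate

def snap_video_frame(samples: np.ndarray, sample_rate: int, fps: int = 24) -> int:
    """Video frame index corresponding to the detected stick snap."""
    return round(find_stick_snap(samples, sample_rate) * fps)
```

    With the snap sample and the snap frame known, both trims Rob describes could be applied automatically.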



  6. Comment by laurence zankowski:

    To quote Michael Goldman at Digital Content Producer, from his article with director David Fincher on the set of Zodiac:

    Fincher adds, “We saved about 30 minutes a day by not having [physical] slates; plus, you almost never have to stop and reload. We probably reloaded about 30 times over the course of 120 days, at the most. [Actor] Robert Downey, Jr., said to me that he had never been on his feet so long on a set, because we rarely had to stop cameras.”

    And then there’s this article in the Editors Guild magazine, also about Zodiac, and this.

    What I’m getting at is a complete camera-generated “slate,” built into the camera’s processing unit, with no people involved: some type of micro organic network connecting mics, lights, and camera. Easily done, and since this is the 21st century, the military has quite likely already done it years ago…. :)