Performance improvements for iShowU Studio

Overview

I’m pleased to announce the release of iShowU Studio 1.3.0 into beta. This build contains performance improvements and a few bug fixes. Notably, the odd black frames that sometimes appeared when scrubbing across the timeline should now be gone.

As part of this change, overall performance of iShowU Studio improves as well.  You can get the beta by enabling “also check for beta updates” in Preferences and then checking for a software update from within iShowU Studio itself.

Note: App Store users – we will submit a build to Apple once we’re sure there’s nothing crazy-wrong with the 1.3.x build.

[Screenshot: the software update preferences in iShowU Studio]

The Details

At the time of release, Studio was extremely conservative with regard to what it would load into memory. This meant that as you scrubbed to a region that contained a new video, that video had to be loaded and initialised before anything could be shown. This is why the black frames existed. There was a very small amount of time (typically about a tenth of a second) where no video frames were available, while the media was being loaded.

From 1.3.0 onwards, Studio preloads project media up front. This results in a slight delay while loading the project, but from then on it’s much smoother / quicker / nicer to edit.
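For the technically curious, the change basically amounts to asking AVFoundation to load the expensive asset properties as soon as the project opens, rather than the first time the playhead hits them. A minimal sketch of that pattern (not Studio’s actual code; the function name and key list are mine):

    import AVFoundation

    // Illustrative only: start loading the properties a timeline needs *before*
    // the user scrubs, so frames can be served without a loading stall.
    // (A real app would keep a reference to each asset, of course.)
    func preloadMedia(at urls: [URL], completion: @escaping () -> Void) {
        let group = DispatchGroup()
        for url in urls {
            let asset = AVURLAsset(url: url)
            group.enter()
            // Kicks off asynchronous loading of the expensive-to-compute values.
            asset.loadValuesAsynchronously(forKeys: ["tracks", "duration", "playable"]) {
                group.leave()
            }
        }
        group.notify(queue: .main, execute: completion)
    }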

If you load up a project with many segments (particularly video segments) and scrub while holding down the alt key (which temporarily disables playhead snapping), you should notice there’s absolutely no delay in showing the video under the playhead. Everything is entirely fluid and smooth.

iOS recording for iShowU Studio

After completing the gesture recorder work, it seemed sensible to record the iOS device itself, directly into iShowU Studio. It’s not hard to record using QuickTime X and then drag that into Studio, but it’d be better if Studio recorded directly. That way you could have projects sized exactly to the size of the device, and things would be … well … nicer!

It almost worked too!

Non Technical Version

It doesn’t work. Hmm. How much more can I say without getting technical? How about: it works if you just record the iOS device itself – but not if you record anything else at the same time. In short, not what I had in mind when I started.

 

Technical Version

iShowU Studio uses a framework called AVFoundation, from Apple. In fact if you want to record the iOS device at all, you must use this framework. No problem, we use it already. How hard can it be?

  1. Getting hold of the device as a camera. Check.
  2. Getting a preview of the camera, including audio. Check.
  3. Recording through an AVAssetWriter via a Session. Check. Er. Not check. Oops. Bang. (A rough sketch of the intended pipeline is below.)
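For the curious, the rough shape of steps 1–3 is shown here. This is a sketch only, not Studio’s actual code; it assumes the iOS device has already been found as an AVCaptureDevice, and the output settings are placeholders:

    import AVFoundation

    final class DeviceRecorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let session = AVCaptureSession()
        let writer: AVAssetWriter
        let writerInput: AVAssetWriterInput
        private var sessionStarted = false

        init(device: AVCaptureDevice, outputURL: URL) throws {
            writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
            writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
                AVVideoCodecKey: AVVideoCodecType.h264,
                AVVideoWidthKey: 750,          // placeholder dimensions
                AVVideoHeightKey: 1334
            ])
            writerInput.expectsMediaDataInRealTime = true
            writer.add(writerInput)
            super.init()

            // 1. The iOS device as a camera.
            session.addInput(try AVCaptureDeviceInput(device: device))

            // 2. A tap on the frames (also used to drive the on-screen preview).
            let output = AVCaptureVideoDataOutput()
            output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "capture.frames"))
            session.addOutput(output)
        }

        func start() {
            writer.startWriting()
            session.startRunning()
        }

        // 3. Recording through an AVAssetWriter: push each frame into the writer.
        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
            if !sessionStarted {
                writer.startSession(atSourceTime: time)
                sessionStarted = true
            }
            // Only append when the writer can cope; otherwise the frame is dropped.
            if writerInput.isReadyForMoreMediaData {
                writerInput.append(sampleBuffer)
            }
        }
    }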

It works if I just record the camera. But that seems a little limiting. Sure, it’ll work if all you want is the device itself. But what if you’re recording a companion app that the iOS device is talking to? Surely you’d want to record the screen at the same time?

And this, dear reader, is where the problems begin.

It seems (seems, because while I’ve submitted a DTS to Apple on this, there’s been no reply yet …) that AVFoundation “doesn’t quite work™” when you start putting it under load.

Recording both a Retina (or 30″ panel) and an iOS session uses ~400-600% CPU load on my MacBook Pro. After just a couple of seconds, while the recording seems OK, things are going pretty badly wrong in audio/video land behind the scenes.

When you finish recording and open the project, you can see all kinds of weirdness in the iOS video. Really odd. The sound becomes truncated and the video misses frames completely (yes, I do have code in there to check the destination writer is ready for more frames, so no, I’m not swamping it with stuff it can’t handle).

So OK. Maybe using an AVAssetWriter is simply a bad idea and I should just use the MovieWriter instead. It’d be simpler. Less code. Better. Right?

Yeh. It would be. If it didn’t crash. Which is what it does, in the same place, after about 2-3s.

Again, it works if that’s all I’m recording. Use it in conjunction with recording the screen (either using the AssetWriter method or by using other independent sessions for the screen / camera) and you’ll get a crash.
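For completeness, here’s roughly what that simpler route looks like. I’m taking “MovieWriter” to mean AVCaptureMovieFileOutput, and again this is a sketch rather than the shipping code; the screen would be another input (something like AVCaptureScreenInput) on this or a separate session, and that’s exactly the combination that falls over:

    import AVFoundation

    // Illustrative sketch of the file-based route.
    final class SimpleDeviceRecorder: NSObject, AVCaptureFileOutputRecordingDelegate {
        let session = AVCaptureSession()
        let movieOutput = AVCaptureMovieFileOutput()

        func start(device: AVCaptureDevice, to url: URL) throws {
            session.addInput(try AVCaptureDeviceInput(device: device))
            // Screen capture would be another input on this (or a separate) session,
            // e.g. an AVCaptureScreenInput for the main display.
            session.addOutput(movieOutput)   // the framework encodes and writes the file
            session.startRunning()
            movieOutput.startRecording(to: url, recordingDelegate: self)
        }

        func fileOutput(_ output: AVCaptureFileOutput,
                        didFinishRecordingTo outputFileURL: URL,
                        from connections: [AVCaptureConnection],
                        error: Error?) {
            // Called once recording stops (or fails).
        }
    }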

Sigh.

So, as of right now, I don’t have a solution that works. I can preview frames. If I only want the iOS device I can record it using both methods available to me, but both exhibit odd or crashing behaviour in various forms when recording everything at once.

I could, I suppose, release a restricted version that doesn’t let you do anything other than record the iOS device, but that seems, well, a bit lame. Maybe it’s a reasonable compromise? Listen to me! Am I trying to convince myself here?

 

iShowU Studio Gesture Recorder now live on the App Store

It’s in the AppStore, yay!

Apple have reviewed and accepted the iShowU Studio Gesture Recorder iOS app into the iOS AppStore. If you haven’t read my previous post, the gesture recorder app is a way to get iOS device gestures recorded in iShowU Studio.

You can get it here: http://itunes.com/apps/ishowustudiogesturerecorder

Progress: The next step (which I’ve almost completed) is to have iShowU Studio record the iOS device directly, as an additional camera. This is sort-of-working here in a test build. The mechanics, UI changes and so forth are in place, but unfortunately I’m seeing some weird artifacting on the video itself. So, more research required. Odd, since it works fine on its own.

Record iOS Gestures in iShowU Studio

Gestures? What?

I’ve been working on a feature that lets you record iOS gestures (movement / touches) from your iPad/iPhone directly into iShowU Studio. I’ll post up a new build for you to play with real soon! It’s more or less done; I’m just waiting to see what happens with the iOS app (i.e. will Apple accept it?). This build of Studio is intended to become v1.2.0 (stable).

So what is it? Put simply, it’s shinywhitebox’s take on representing the gestures or touches that occur while recording an iOS app. Once you’ve got the iOS app installed (it’s currently in submission to the App Store), you can connect to Studio and begin recording gestures either directly into the timeline of the project or as “templates” into the Gesture Template Manager.

 

Design – why direct?

I spent quite a lot of time with different design ideas. The first design involved having somewhat fixed gesture segments, with properties such as “number of touches” and “direction of movement”. I felt this was limiting and really just “meh”.

My second attempt allowed direct on-screen editing of the gesture start / end. It was like a simple non-keyframed animation that had a start and end point. You would modify the number of touches via the segment properties, but the actual placement and direction of movement were controlled via two “handles” on the gesture itself, representing the starting and ending positions. This was an improvement, but became clunky when I began to work out how to represent rotation. The interface and interaction quickly got messy, I thought.

So, attempt #3 was to discard the entire idea of “editing” the gesture and just record whatever happened on some touch surface. I began by thinking that the touch pad would be a good idea, then I thought: “if you’re recording iOS stuff, you’ve got the perfect recording touch surface - the device itself”.  The iShowU Studio Gesture Recorder was born.

Now, a Gesture segment is a container used to play back a series of touch movements that occur over time. If a built-in gesture doesn’t fit your needs, no problem! … you can simply record whatever you want and have that played back instead. Of course you can move / scale the recorded gestures as you see fit (which is pretty neat imho).
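If it helps to picture it: conceptually a recorded gesture is nothing more than timestamped touch positions plus a transform applied at playback. The types below are purely illustrative (my names, not Studio’s internals):

    import Foundation
    import CoreGraphics

    // Illustrative only: a recorded gesture is essentially timestamped touch positions.
    struct TouchSample {
        let time: TimeInterval        // seconds from the start of the gesture
        let points: [CGPoint]         // one point per finger, in device coordinates
    }

    struct RecordedGesture {
        var samples: [TouchSample]
        var transform = CGAffineTransform.identity   // applied on playback to move / scale it
    }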

Lemme Know!

I’m always keen to get your feedback. Let me know what you think! Let me know what can be improved.  My next lot of work will focus on annoying things like the “blackness” that sometimes appears while scrubbing through the timeline, and better performance during editing.

Oh, and let’s not forget: I’ve yet to add the ability to record the iPhone/iPad directly into Studio. You can do it really easily right now using QuickTime (Yosemite/iOS 8 only), but it’d be nice to be able to make a recording directly without having to go through QT.

Also, be warned: if you try to publish a movie to iTunes Connect that was recorded directly with QuickTime, it will likely fail because the FPS is too high. Ask me how I know :) The upshot is that apps like Studio will offer some time saving, because by default everything will export at a sanity-inducing 30fps, and then (hopefully) iTunes Connect won’t throw a wobbly.

 

Customer in the Spotlight

Making Magic with iShowU HD Pro

[Video: iShowU – GAMES]

This week we were lucky enough to catch up with one of our loyal customers, online gamer Marshall Sutcliffe, to hear about how shinywhitebox’s screen capture product iShowU HD Pro is helping him reach his followers – and make magic happen in the online gaming world!

Marshall Sutcliffe, an American podcaster and broadcaster, uses iShowU HD Pro to record himself playing the online trading card game Magic: The Gathering and connect with his 40,000 followers.

With over 12 million players around the globe, Magic, like many popular online games, gathers whole communities of dedicated fans around expert players like Marshall.

Marshall’s regular recordings are used by websites or put up on his own YouTube channel, which has over 40,000 active watchers. Each video he uploads is 2-3 hours long, and as he does this twice a week, he knew he would need a robust program.

Why iShowU HD Pro? 

[Image: Marshall Sutcliffe gaming with iShowU]

“I looked around at what my options were, and I really liked the layout and design approach to iShowU HD Pro. I use the program multiple times per week as part of my work routine and it has never given me grief. Over the five years I’ve changed computers, used it on both a laptop and a desktop, and never had any reason to change it. It does what I want it to do, every time.”

“I would recommend iShowU HD Pro for anyone looking to do any type of screen capture work on a Mac. It’s robust, solid, and works with all of my microphones and software.”

 - Marshall Sutcliffe, Podcaster/Broadcaster/Poker Player, USA.

Download Your Free Trial of iShowU HD Pro

About Marshall Sutcliffe:

Marshall is part of the Magic: The Gathering coverage team, often appearing at Pro Tour and Grand Prix tournaments hosting video coverage of rounds. He is also the writer of the Limited Information column on magicthegathering.com and host of the Limited Resources podcast, showing his love for Limited as a format. Marshall is also an avid poker player. To find out more about the card game Magic, or about Marshall, you can visit his website here.

iShowU Studio spends time at the gym (flexible selection)

I’m not sure why it’s taken me until now (probably something to do with all that other work!), but I’ve just finished implementing the most requested feature from the initial beta test of iShowU Studio – selection of partial screen areas.

Simply put, this new feature lets you create iShowU Studio projects from a sub-region of the screen. That is, a “smaller bit” ™.

I’ve taken all the best bits from iShowU v1, iShowU HD and Studio and wrapped them all up into a single selection interface. You can:

  1. Select by drawing a rectangle (like in HD)
  2. Move the selection rectangle by dragging it about or resizing it (v1 & HD)
  3. Still select the full screen (easily!  … ala Studio)
  4. Move it using keyboard arrow keys (*new!* taa-daaaa).
  5. Resize it using keyboard arrow keys (just hold SHIFT at the same time)
  6. Select any window under the mouse (like in HD… but in Studio, it’s not a mode… just press option and ta-daaaa, done).

I’m actually pretty excited about this one. It brings new flexibility, and I think I’ve managed to make it blend with the existing ethos of Studio while still presenting a really simple-to-use set of functions. Yay.

Here’s a demo of me playing around selecting different areas on the Retina display. The demo doesn’t do it justice tho. You have to have a play with it to see just how easy it is to select whatever area you want now.

 

Tracking the gremlins in iShowU HD

This week (and some of last) has been spent digging into some gremlin-like issues with iShowU HD. I’ve been focusing on:

  1. Why the heck the silly “can’t find display <big long awful number here>” message appears for some users…
  2. How iShowU HD behaves in a multi monitor environment, and;
  3. Speed

The main thing I’m trying to do is understand what’s causing these irritating messages to appear. The reason it’s “hard” is that I don’t get them on any machine here.

To this end, I’ve made a number of improvements to the way the capture space is handled. In the process I think I’ve fixed a number of bugs, and sped things up at the same time. I’ve had early reports from some users I’ve been in touch with that the changes have been good.

Speed-wise, it turns out that if “drop duplicate frames” is enabled (a sensible default, in theory), capture can be quite slow for about 30 seconds. After which it magically speeds up. But annoyingly, it’s not just slow … it’s jerky and horrid. So that option is now disabled by default. It can be enabled if you need it, of course. Oh, and wouldn’t you guess? It doesn’t happen on Yosemite. Go figure.

This brings its performance much more in line with pre-10.7.x. It’s faster to start a capture and smoother to boot.

If you’ve any issues, don’t hesitate to get in touch over at the bleeding edge page on Facebook. It’s a quick, easy way to reach me. I do read email and support tickets, but these typically have a longer turnaround time (a day or two).

I’ve been keeping to the rules tho: I haven’t fed the codebase water, fed it after midnight (debatable), nor exposed it to bright light.

Pick up the new build using the normal methods, i.e. software update within the app, making sure “beta updates” is enabled in the iShowU HD preferences software update pane.

iShowU Studio Quickstart videos are now online

I spent some quality time with Studio yesterday to produce five quick start videos for … iShowU Studio. These are aimed at introducing the app and its layout to both new and existing users. The series covers some basic editing as well as touching on a couple of more advanced features such as freeze-frame and pan/zoom.

These videos are now online, available both in our video section and also as a playlist on YouTube.  Here’s a brief synopsis of each:

  1. Introduction, what Studio “is” and a quick tour of the UI. I also covered basic insertion of shapes.
  2. Basic Editing, how to select objects, multiple selection, lasso dragging, delete, trim and region cut
  3. Properties & Visualization, modifying the look of objects and using mouse & keyboard visualizers
  4. Advanced Editing, an introduction to Pan/Zoom & Freeze Frame, and also freezing using an audio insert. Examples of all of these are included in the video.
  5. Sharing, how to share as a QuickTime file, sharing to YouTube and also authentication with YouTube via oAuth.

Probably the hardest part (for me) was making these videos short. As a precursor to this effort I spent some time planning a table of contents of all the features of Studio. If I were to keep the videos at about 4 minutes each, I’d end up with about 17 videos in total! When you pause and write down all the little things Studio can do, you end up with a lot of material to cover!

As always, I’m keen for further suggestions for more videos. I’m aware that these cover the basics, and users with more experience may be interested in more complex workflows. Fire suggestions my way!

I feel like I’m getting a handle on how to produce them reasonably quickly. These five took me about 8-10 hours and that time includes script writing, proofing, production, encoding, uploading and blog posts.

I’ll probably make a post at some point about the workflow I’m currently using (it’s essentially: all audio first, make the video fit). I’m finding that doing the script first, then the audio, then the video means I have very, very little rework (if any, in fact).

Don’t leave all the loot for Neil!

It’s me again :)

I thought I’d do something a little different (you know, because I can’t spend all day coding, right?) and run a competition involving iShowU Studio. The idea is simple: you create an awesome-looking short video showing how you make use of Studio, and we give away some neat prizes, including an Apple TV, iPod shuffle and iTunes vouchers.

Full details are over here on the main website. You can register from that page and we’ll send a confirmation email a few minutes later, along with instructions and super interesting things like terms & conditions :)

I’m keen to see what people come up with! I’ve been talking to a few people recently and it’s been exciting to see what they’re doing with iShowU. I’m hoping this competition shows off some more of the projects people are working on, and I’m looking forward to the response!

Why are you still reading? Go over to the competition page and register now!

Layout text using Cocoa? You might want to rethink that!

I’ve recently been refactoring the text implementation in iShowU Studio. Previously I had used Cocoa’s built-in NSString additions (drawAtPoint: etc.), which appeared to work pretty well apart from some minor hiccups trying to get the text vertically centred.

In the initial release of Studio I got around the vertical problem by munging the values. The font was fixed, so I could get away with that.

Version 1.0.4 of Studio introduced the ability to change the font. Things started to go … quite … wrong. Cocoa’s idea of what constitutes a text rectangle (aka the bounding box) is weird, and often just downright wrong. Here’s an example:

Not what the doctor ordered. Very incorrect bounding boxes.

The red line represents what Cocoa’s NSString additions think the bounding box is. Notice it’s too tall. That’d result in the text being too low when vertically centred. Sometimes it’s right. Sometimes it’s not.

The blue box is what NSLayoutManager thinks. Um, I’m not a rocket scientist, but I’m pretty sure that text is not going to fit in there.  What’s worse is that sometimes each of the methods above gets it right, sometimes not. And they are both inconsistent. Pretty much completely useless if you want to  present correctly vertically centred text. And before you ask, because I just know it’s on the tip of your tongue, yes I did set the NSTypesetter behaviour to NSTypesetterBehavior_10_2_WithCompatibility.
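If you want to reproduce this, the two measurements above come from something along these lines (a sketch only; the font is just an example, and the names are mine):

    import AppKit

    let font = NSFont(name: "Zapfino", size: 48) ?? NSFont.systemFont(ofSize: 48)
    let attrs: [NSAttributedString.Key: Any] = [.font: font]
    let text = "Hello there"
    let limit = NSSize(width: 1000, height: 1000)

    // The "red" box: NSString drawing additions.
    let stringBox = (text as NSString).boundingRect(with: limit,
                                                    options: [.usesLineFragmentOrigin],
                                                    attributes: attrs)

    // The "blue" box: NSLayoutManager.
    let storage = NSTextStorage(string: text, attributes: attrs)
    let container = NSTextContainer(size: limit)
    let layoutManager = NSLayoutManager()
    layoutManager.typesetterBehavior = .behavior_10_2_WithCompatibility
    layoutManager.addTextContainer(container)
    storage.addLayoutManager(layoutManager)
    layoutManager.ensureLayout(for: container)
    let layoutBox = layoutManager.usedRect(for: container)

    // Depending on the font, these two rects disagree with each other *and* with
    // where the glyphs actually land — hence the mis-centred text.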

In short, unless I’m doing something stupid (always possible), various fonts yield very, very different bounding boxes. Nothing is consistent, so if you’re going to try to lay out this text vertically centred, good luck with that.

CoreText – not so hard after all

I scratched my head over this one for a couple of days, pondering what to do while I worked on some other code. Enter, CoreText.

Turns out it actually works. The bounding boxes returned are way more sane. Here’s the same text, with the same font, but rendered using CoreText:

Look at that! It’s a sensible bounding box!

I must give credit to Jjgod Jiang for his slides on text layout. With those I was able to create a simple class that would a) perform the right layout and b) provide decent bounding boxes that were consistent across all fonts.
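The heart of it is tiny. Something along these lines (a sketch of the pattern, not the actual class from Studio; the function name is mine):

    import AppKit
    import CoreText

    // Ask CoreText how much space a (possibly multi-line) string really needs.
    func textBounds(for text: String, font: NSFont, maxWidth: CGFloat) -> CGSize {
        let attributed = NSAttributedString(string: text, attributes: [.font: font])
        let framesetter = CTFramesetterCreateWithAttributedString(attributed as CFAttributedString)
        var fitRange = CFRange()
        return CTFramesetterSuggestFrameSizeWithConstraints(
            framesetter,
            CFRange(location: 0, length: attributed.length),
            nil,                                           // no extra frame attributes
            CGSize(width: maxWidth, height: .greatestFiniteMagnitude),
            &fitRange)
    }

The size that comes back is consistent across fonts, which is all you really need to centre text vertically with confidence.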

So, if you find yourself struggling to perform layout with the Cocoa NSFont/NSLayoutManager classes, or if you want to discover correct bounding boxes for arbitrary (and possibly multi line) strings, turn to CoreText. It’s simple enough to use, and it actually works.