We’ve all looked up videos on YouTube, Vimeo, or TED, perhaps to learn something, to be entertained by a yawning cat, or ideally to hear a compelling speaker. The goal of this type of search is to deliver a video that imparts a skill or idea and, for the vendors above, to hold your attention long enough to serve up the next video compelling you to click. Their main motivation: show interesting videos, but take as much of your time as they can to maximize ad revenue.
But of course, for those of us in learning and knowledge management, the motivation is different. We want our teams engaged with video content, with minimal time spent searching for the right answer within the videos. Yet for most of us, searching our current content repository, LMS, or LXP for video means bracing for a long ride: a set of results whose segments we must watch one by one until we reach the one we’re looking for. We spend more time than we’d like and often never find exactly what we set out for. But there is good news. Technologies both established and new are changing video search. As a result, we have our 3 reasons video search will now start to be great.
1) Knowledge management and learning vendors can now search through transcribed narration from all your videos.
This capability has been available for some time: every video these systems scan has its narration transcribed to text, which is added to the metadata surrounding the video. Systems with this capability will very likely return the video or videos you’re searching for. Degreed (LXP), Bloomfire (knowledge management), Docebo (LMS), Workday (HRIS and LMS), and other vendors focusing on video content offer it. The goal is to surface the desired video, whether bought or curated by your organization’s team or created by your colleagues. Recommendations, “Likes,” and “Thumbs-ups” help in this process as well. Most of these systems also add auto-tagging, which helps cull the herd of viable video content and improve the experience. The focus is minimizing time to the right video or videos, which compared with the YouTube/Vimeo/TED experience above is a huge improvement.
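To make the idea concrete, here is a minimal sketch of how transcribed narration can make a video library searchable. The video titles, transcripts, and helper names are hypothetical, and real systems layer relevance ranking, tagging, and recommendations on top of something like this:

```python
# Sketch: index speech-to-text transcripts so keyword queries find videos.
from collections import defaultdict

def build_index(videos):
    """Map each transcript word to the set of video ids containing it."""
    index = defaultdict(set)
    for video_id, transcript in videos.items():
        for word in transcript.lower().split():
            index[word.strip(".,!?")].add(video_id)
    return index

def search(index, query):
    """Return video ids whose transcripts contain every query word."""
    words = [w.strip(".,!?").lower() for w in query.split()]
    hits = [index.get(w, set()) for w in words]
    return set.intersection(*hits) if hits else set()

# Hypothetical library: transcripts produced by a speech-to-text service.
videos = {
    "onboarding-101": "Welcome to the team. This video covers expense reports.",
    "safety-training": "Always wear protective equipment on the factory floor.",
}
index = build_index(videos)
print(search(index, "expense reports"))  # {'onboarding-101'}
```

The key point is that the transcript text, not just the title or description, becomes part of the searchable metadata.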
2) There are now video and content systems that can pinpoint and take you to the exact moment within a video to answer your search request.
At Feathercap we have this capability, and we believe it will become a trend for the industry. Our video search, for example, turns each video’s narrated audio into searchable text and then applies our AI-driven search to pinpoint and take you to the exact moment in your videos that answers a question. Ask us a question and we’ll surface, in text form, the sentence spoken in the video that we believe answers it, with an accompanying thumbnail beside the answer text. Selecting the thumbnail takes you to the specific moment of the video where your question is answered. If your question is answered at 21 minutes and 30 seconds into a 30-minute video, the accompanying thumbnail takes you to that exact timestamp.
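The mechanics can be sketched roughly as follows. This is not Feathercap’s actual implementation; it assumes transcript segments that carry start times (as caption formats like WebVTT do) and uses simple word overlap in place of AI-driven matching. The segment texts and function name are illustrative:

```python
# Sketch: find the transcript segment that best answers a question and
# return its start time, which becomes the deep link into the video.
def best_moment(segments, question):
    """Return (start_seconds, sentence) for the segment most like the question."""
    q_words = set(question.lower().split())
    def overlap(seg):
        # Count how many question words appear in this segment's text.
        return len(q_words & set(seg["text"].lower().split()))
    best = max(segments, key=overlap)
    return best["start"], best["text"]

# Hypothetical transcript: start times are in seconds from the video start.
segments = [
    {"start": 0,    "text": "welcome to the product demo"},
    {"start": 1290, "text": "you can export the report as a pdf from the file menu"},
]
start, answer = best_moment(segments, "how do I export a pdf report")
print(start)  # 1290 seconds, i.e. 21 minutes 30 seconds
```

A production system would replace the overlap score with semantic matching, but the shape is the same: answer sentence in, timestamp out.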
3) Actually finding the answer you’re looking for within any video!
This third reason is really the payoff of the technology improvements above. Instead of wasting time combing through results for the right video and then scanning through it until the exact right segment is found, that work is done for you. Imagine all those existing webinars, demonstrations, and company how-to videos available to give you answers without forcing you to sit through more of them than you have to. This also means the speed advantage of creating user-generated videos is preserved, because everyone can instantly experience the gold nuggets of knowledge within them.
See our AI and the Augmented Workforce primer on how a workforce and technology can effectively work on tasks together.