In a previous post I mentioned that we at TEL in Durham had been running some studies using the Kinect with SynergyNet. Though data analysis of the results is still being carried out, I've decided to provide some details on the system, its workings and its capabilities, along with some of the initial findings.
We recently hosted a day of demonstrations of SynergyNet in use in our lab at TEL-Durham. As a result, a number of articles about our work, featuring professional pictures and videos, have appeared online. We also took the opportunity to make our own videos, which are available in this post on the SynergyNet blog.
As mentioned in the recent post about using the Kinect with TUIO, I've been working with the Microsoft device to create a method of classroom orchestration for teachers. With the Kinect now playing nicely with our multi-touch framework, we've let a number of teachers loose on it. Some images of the work so far can be seen on the SynergyNet blog.
We've recently been working on utilising the Microsoft Kinect within the SynergyNet project at TEL in Durham. I'm currently working on several publications based on this work and will post updates when they're finished. In the meantime, I thought I would post a small application I wrote while getting to grips with the Kinect.
All current versions of SynergyNet, TEL's multi-touch framework for classrooms, support TUIO. However, anyone who has tried the software may have noticed that its TUIO support could be flaky at times. This has been addressed in a recent update to SynergyNet 2.5 and 3.
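For readers unfamiliar with TUIO: it is an open, OSC-based protocol for transmitting touch and tangible-object events over the network, which is what lets trackers like the Kinect bridge feed a multi-touch framework. As a rough illustration only (this is not SynergyNet's own code, and the helper names are my own), here is a minimal Python sketch that builds and parses a raw TUIO 1.1 `/tuio/2Dcur` "set" message, the message type that carries a cursor's position:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def build_tuio_set(session_id, x, y, vx=0.0, vy=0.0, accel=0.0):
    """Build a raw OSC '/tuio/2Dcur set' message (TUIO 1.1 cursor profile).

    Arguments follow the spec: session id (int32), normalised x/y position,
    x/y velocity, and motion acceleration (all float32).
    """
    msg = osc_pad(b"/tuio/2Dcur")          # OSC address pattern
    msg += osc_pad(b",sifffff")            # type tags: string, int, 5 floats
    msg += osc_pad(b"set")                 # TUIO command
    msg += struct.pack(">ifffff", session_id, x, y, vx, vy, accel)
    return msg

def parse_tuio_set(packet):
    """Minimal parser for the message built above; returns (session_id, x, y)."""
    def read_string(buf, off):
        end = buf.index(b"\x00", off)
        s = buf[off:end].decode()
        off = end + 1
        off += (-off) % 4                  # skip padding to 4-byte boundary
        return s, off

    addr, off = read_string(packet, 0)
    tags, off = read_string(packet, off)
    assert addr == "/tuio/2Dcur" and tags == ",sifffff"
    cmd, off = read_string(packet, off)
    assert cmd == "set"
    sid, x, y, vx, vy, m = struct.unpack_from(">ifffff", packet, off)
    return sid, x, y
```

In practice these messages are sent as UDP datagrams (conventionally to port 3333) inside OSC bundles alongside "alive" and "fseq" messages, which is where flaky implementations tend to trip up; the sketch above covers only the single-message payload.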
Recently we've made a push at TEL in Durham to make the code developed in the SynergyNet project more accessible. We have now started an effort to create guides for getting started with developing on our software. This post details how to build a development platform for our video time-line analysis tool, SynergyView.