A new approach to VDI and a cloud-born infrastructure


This is for all the techies out there…

A quick update on my new job thus far.

In July of 2013 I started working as the tech director at Vistamar School, a small independent high school in Southern California. The school has 260 students, 40+ faculty, and about 20 staff. There are 21 classrooms, a gym, and the usual science labs, open spaces, and offices. On the IT front, there is a computer lab with 18 Windows PCs, and a mix of Windows PCs, iMacs, MacBooks, and newly added Chromebooks floating about throughout campus. Wifi is available all over. All classrooms, offices, and shared spaces have IP phones installed.

What’s unorthodox about the tech setup here is that there is an onsite cloud – a full-out VDI. For such a small school, it is not the norm. What pushed this approach forward was that it was seen as the only way the school could “level the playing field” by giving students and faculty equal – online – access to the same version of MS Office. Students who do not have an up-to-date computer at home do not have to worry about updating to the same version of Excel when their science teacher requests work be done using version-specific features. The system also provides students with a universal way of signing into their cloud desktops and accessing their files anywhere, any time. Lastly, it was seen as a way to rely less on desktops and more on less-expensive thin clients in the future.

The launch of this cloud initiative was marred by problems. Connectivity, reliability, ease of use, speed – you name it, it was an issue. How this came about is that the then net admin saw it as a way to play with a new, albeit expensive, toy without having to pony up the dough himself. His training was fully covered, and once he felt competent enough he promptly made his way out to a higher-paying job. The whole project was actually planned and installed by an outside vendor – anyone who’s ever dealt with a network deployed by an outside consultant knows why there are some problems here.

Anyway, one of my main issues with this approach in a school is that there are many graphics-rich applications and audio needs throughout any given school year for which VDI is not the best option. Additionally, going this route requires a certain specialization – an expensive one – in-house to keep the whole thing humming. An IT team of 1.5 is certainly not enough given other high-maintenance systems such as IP phones, classroom audio/video needs, faculty/student support, network and wifi support, the web portal, etc. Faculty professional support, student/family outreach, and proactive planning fall by the wayside given the constant putting out of fires.

On the back end there are two terminal servers and many other virtualized servers. Off-campus users log in through Classlink (RDP via an internal HTML5 gateway) using a web browser. On-campus users log in via a direct RDP session to an assigned server. One terminal server is for faculty/staff and one for students – no load balancing is set up. The platform rests on three HP hosts running VMware vSphere 5.
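Since each user population is pinned to a single terminal server with no load balancing, a quick reachability check against the RDP port is about the simplest health test you can run. Here is a minimal sketch; the hostnames are placeholders I made up, since the real server names aren’t given above.

```python
import socket

# Hypothetical hostnames for the two terminal servers -- placeholders,
# not the school's actual server names.
TERMINAL_SERVERS = {
    "faculty/staff": ("ts-faculty.school.local", 3389),
    "students": ("ts-students.school.local", 3389),
}

def rdp_reachable(host: str, port: int = 3389, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the RDP port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for role, (host, port) in TERMINAL_SERVERS.items():
        status = "up" if rdp_reachable(host, port) else "unreachable"
        print(f"{role}: {host}:{port} is {status}")
```

A port being open doesn’t mean the session host is healthy, of course – it only tells you the RDP listener is answering – but with no load balancer in front, even this crude check catches the “one server down, half the school locked out” failure mode.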

In theory it reads like a great setup, and a win-win for students and faculty. In practice, however, it’s another story. There are many reasons why it did not work up to expectations. The hardware is not enough to sustain so many concurrent sessions. Additionally, when the system launched, the campus had a 20 Mb Internet uplink, making multiple sessions via the online portal dreadfully slow. Something that complicates matters a bit more is that faculty and staff use an Exchange server while students use GoogleApps for collaboration and communication. You can imagine the complexity of setting up and then troubleshooting GPOs once you get into the nitty-gritty of Active Directory.
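Some back-of-the-envelope arithmetic shows why that 20 Mb uplink choked. The per-session bandwidth figures below are assumptions for illustration – light office RDP work versus a graphics- or audio-heavy session – not measurements from this deployment.

```python
# Rough ceiling on concurrent remote sessions over a shared uplink.
# Per-session figures are assumed, not measured at this school.
UPLINK_MBPS = 20

LIGHT_SESSION_KBPS = 150   # assumed: text/office RDP work
RICH_SESSION_KBPS = 2000   # assumed: graphics- or audio-heavy session

def max_sessions(uplink_mbps: int, per_session_kbps: int) -> int:
    """Upper bound on concurrent sessions, ignoring all other traffic."""
    return (uplink_mbps * 1000) // per_session_kbps

print(max_sessions(UPLINK_MBPS, LIGHT_SESSION_KBPS))  # 133 light sessions
print(max_sessions(UPLINK_MBPS, RICH_SESSION_KBPS))   # 10 rich sessions
```

The gap between those two numbers is the real story: plain document editing might have squeaked by, but the graphics-rich classroom work mentioned earlier eats the whole pipe with a handful of users – and that’s before any other campus traffic shares the link.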

The phone system is VoIP, there is a robust Meraki wireless network, and a slew of services for end users, BUT the network is not VLAN-ed, QoS-ed, or even physically segmented to optimize data throughput on campus. There are many challenges to tend to with only one full-time techie/netadmin/do-it-all (myself) and one part techie / part ed-tech integrationist / part tech teacher.

The challenge is the level of specialization needed to host and run your own virtualized server farm. Going this route requires more (wo)man-hour resources and hardware on the server end. Additional resources are required for faculty and other end-user training, along with the time to carry such a program consistently. Organizationally there must be a clear commitment to providing the space and time for this endeavor to get off the ground successfully. It’s a tough sell given other demands on teachers’ time.

The first thing I did when I arrived was to increase Internet bandwidth to 100 Mb. Though it is not enough, it makes things a bit better. A few small config changes have been made on the servers, but not nearly enough to get all the issues resolved. Because I started in July, when all faculty and many staff are on vacation, it was tough to gather enough info to make major changes without getting myself into a jam. I started the academic year with things pretty much the way they were left at the end of the previous year. Not being here a couple of months earlier has added 12 months to my timeline for major revamps to the system. I must be careful not to disrupt the day-to-day teaching and learning symphony at play here.

During the 12-13 academic year money was raised to get all faculty laptops for 2013-2014. The Lenovo Helix hybrid was selected before the summer break. Purchasing the machines gave me an opportunity to move all faculty to GoogleDrive instead of storing their files on campus servers (all faculty have GoogleApps accounts, though they are not using the mail feature yet). This has made the faculty more independent. Trying out Windows 8 on newly launched hardware has not made for the best of times, but I am happy that our faculty are so understanding and patient.

To make things run a bit more efficiently, I changed our Internet traffic filter from a Prism box over to OpenDNS. It makes things much simpler and takes one more thing down a notch on the maintenance front.

These are the major impacting factors that have made an appearance this year:

  • Faculty got their own hybrid mobile device – no more dependency on a desktop or virtual session.
  • Faculty were moved from network storage to GoogleDrive – mobility with a safety net.
  • The wireless network was upgraded to provide more coverage.
  • Internet bandwidth was increased five-fold to 100 Mb.
  • Training was focused on utilizing and maximizing Google Docs, GoogleDrive Sharing and GoogleDrive storage.
  • A Chromebook cart was made available for all to share – more GoogleApps opportunities.

Still to come this year are newer digital projectors for classrooms where the current projector is way old, and Chromecast devices to allow fluid wireless streaming from faculty devices.

There are still challenges, and fine-tuning the virtual server farm is an ongoing chore. However, making the faculty mobile is the best thing we’ve done to positively impact this learning community.