
ToutVirtual to offer product suite via BT Engage IT

Delivers single view to manage virtualization and cloud computing infrastructure

Carlsbad, California – ToutVirtual, Inc., an emerging leader in virtualization intelligence and system optimization software for cloud computing infrastructures, today announced that it will offer its product suite through BT Engage IT, the IT services division of BT Business.

The VirtualIQ suite of products is designed to support virtual server room operations through three stages of virtualization — design, deployment, and delivery — helping users make correct decisions for virtualization optimization.

The VirtualIQ dashboard provides decision-making information in a single, integrated web console that is simple to install and use. VirtualIQ provides a ‘single pane of glass’ across physical and virtual infrastructure, including support for all leading hypervisors from Citrix, Microsoft, Oracle and VMware.

“Hypervisors have become a commodity,” said Andy Nabbs, business manager for virtualization and end-user computing, BT Engage IT. “The key challenges faced by customers are around management processes for virtualization and cloud. ToutVirtual’s VirtualIQ products help by offering a ‘single pane of glass’ to give holistic visibility and control of their infrastructure.”

“The signing of the agreement with BT Engage IT is an important milestone for our company,” said Jess Marinez, president of ToutVirtual. “As we are increasingly gaining market recognition and sales traction, BT Engage IT’s wide reach to customers in the UK provides us with the ideal sales channel for our VirtualIQ line of products.”

Add comment June 1st, 2012 Administrator

The Struggle of Virtualization?

2011 will be a lackluster year for virtualization, to be sure

The end of 2010 was a struggle on a personal level for me.  No, I will not explain why; some things are personal.  Let us say my trials had nothing to do with technology.  But the events of late 2010 did give me pause, to think about myself, and about the enterprise information technology industry that I have been in for more than 20 years, or closer to 30 if you count computing in general.  Sure, information technology has changed, but in reality it has reinvented itself more than actually changed.  Faster components, larger scale; name any aspect of information technology since the late 60s, and it is not new, just changed.

This same trend is true for the people who work in the information technology industry.  Be it operations, engineering, or architecture, the roles have expanded and contracted, trends come and go, but the heart of the industry has been people.  I have made this point before, to be sure, but the emphasis on technology somehow replacing people now, in 2011, seems more acute and more focused on people than any other factor.  Why?  Cost, of course.  The answer is obvious.  Virtualization has created a commodity perspective; why else would a cloud strategy be so popular?  But that does not explain it in total.  The idea that information technology is a competitive advantage is still true to a degree, but the idea of long-term investment, meaning people, has been lost along the way.

True, a few earth-shattering concepts have not changed the world quite the way we expected, whereas a few more have… for example, does the cell phone you have now qualify as a phone or as a mobile appliance?  How many processors does it have?  Some of the latest mobile appliances have two processors!  Could we have ever envisioned the need for 2 GHz of CPU capacity in a mobile appliance?  Or that mobile devices, never mind video game consoles, would leverage virtualization?  Maybe 5% of the world saw this coming a few years ago and guessed right, and maybe 1% made real, significant money off of it.  Ignoring the success of Apple, of course; Steve Jobs has a long history of good and bad guesses about where consumer electronics would go.  Did someone say Apple IIgs, Macintosh IIcx, or Apple Newton?

So, what has virtualization been doing for the last year or even two years?  Not much.  Virtualization as a commodity has been the mindset.  Every vendor with original ideas about virtualization has given up, disappeared, or refocused on the management of virtualization, the clouding of computing.  Pick any segment of the information technology market, only to return to the same concept: there is nothing new under the Sun, just repackaging, realignment, rehashing.  Is this true?  Has the information technology industry reached its zenith?  I would say it has, for the most part, especially in respect to virtualization implemented with hypervisors as the architectural foundation, right?

Sure, solid-state, stateless, diskless, blah, blah… will continue to evolve.  But that is not real change, just morphing the old into newer packaging, performance, or scale.  There is no innovation taking place, not in any real sense.  Where have all the dreamers gone?  VMware is entrenched around ESXi; this is not a negative.  It makes sense: ESXi is easier to support, or should I say it in total truth, ESXi is cheaper to support.  KVM, Hyper-V, etc. are adding features to improve management; that is it, period, end of story.  Xen, well, Xen is trying.  Oracle, speaking of Oracle, what is going on… LDoms technology had the potential to really slam the monolithic hypervisor; what happened?  Virtual instances that do not replicate the OS… hello, did this idea just fall off the edge of the world or what?  AIX with its micro-partitioning, the micro-LPARs concept, is significant, but only to the AIX world.  And zSystem-based Linux, zLinux, has some potential, but the actual partitioning is not within the Linux OS space, is it?  Don’t even bring up the Windows Azure concept; Microsoft has mastered the art of old-is-new.  Excuse me while I am ill!

So, what is 2011?  I call it… The Year of Optimization!  Just that and nothing more, because whatever virtualization you have, you will tweak it, tune it, and call yourself successful.  A number of entities will add additional, different hypervisor architectures to establish niches: VMware and Hyper-V, or VMware and KVM, or KVM and Xen, etc., etc., etc.  The days of monolithic infrastructures based on one key virtualization vendor are over.  If you don’t believe that, then just look at what the big hosting vendors are doing… KVM, Hyper-V, VMware, Xen all under the same roof?  Of course.  But, and this is the kicker… no one wants to increase staffing costs for the team of dynamic, resourceful personnel who understand the complex environment at the operational level, do they?  Sure, architecture and engineering costs are small compared to operational costs, right?  And with every single vendor out in the world promising clouding, what are information technology managers to do?  That is easy… cut costs; after all, a service-based industry does not need skill or talent to any large degree.  Go global; India, China, even South America do information technology as well as the United States, Germany, or even Canada, right?  I wonder?  Do they?  Do they really?  TCO calculations for CEOs are like statistics… you can make the numbers say anything.  Especially when the accountants and managers roll over every few years, cough.
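That multi-hypervisor reality, KVM, Hyper-V, VMware, and Xen under one roof, is exactly why the operational team matters: someone has to normalize each platform behind one view. As a purely hypothetical sketch (the class names, adapters, and numbers below are illustrative, not any vendor’s actual API), a ‘single pane of glass’ tool boils down to polling each hypervisor through a common interface and aggregating the results:

```python
from dataclasses import dataclass

@dataclass
class HostStats:
    """Normalized metrics a cross-hypervisor dashboard would aggregate."""
    hypervisor: str
    vm_count: int
    cpu_used_pct: float

class HypervisorAdapter:
    """Common interface; a real adapter would wrap vSphere, Hyper-V, or libvirt calls."""
    def poll(self) -> HostStats:
        raise NotImplementedError

class FakeKvmAdapter(HypervisorAdapter):
    # Stand-in for a KVM/libvirt-backed adapter; returns canned numbers.
    def poll(self) -> HostStats:
        return HostStats("KVM", vm_count=12, cpu_used_pct=41.5)

class FakeEsxAdapter(HypervisorAdapter):
    # Stand-in for a VMware ESXi adapter; also canned numbers.
    def poll(self) -> HostStats:
        return HostStats("ESXi", vm_count=30, cpu_used_pct=63.0)

def single_pane(adapters):
    """Aggregate stats from every hypervisor into one unified view."""
    stats = [a.poll() for a in adapters]
    return {"hosts": stats, "total_vms": sum(s.vm_count for s in stats)}

view = single_pane([FakeKvmAdapter(), FakeEsxAdapter()])
print(view["total_vms"])  # one VM count spanning both platforms
```

The hard part in practice is not the aggregation loop; it is writing and maintaining a correct adapter per platform, which is precisely the operational skill set the paragraph above argues no one wants to pay for.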

I leave you with one last thought.  What happens when you have tweaked, tuned, and such your virtualization infrastructure?  When you have found the best TCO model for your specific company, firm, or organization?  What will you do next?  An even better question is… will you have the talent and skill, in house, to do the unthinkable?  Innovate?  What a struggle that will be, no?

Add comment January 15th, 2011 Schorschi

The Strongest Aspect of Virtualization Is the Human Element

Critical Observations – Chapter 02

Read the title a second time. It is not a cliché, although it may sound like one. Having watched various virtualization strategies tried over the years, there is one common or key element, the human factor, that makes, breaks, or otherwise confounds those attempting virtualization or continuing to be successful with it. How it confounds those who misjudge or misvalue the human aspect of virtualization varies. But I have observed the following scenarios take place.

1. Lack of Talent
2. Lack of Commitment
3. Lack of Leadership
4. Lack of Stability
5. Lack of Value

Lack of Talent is the obvious pitfall, to be blunt and tactless, but there is no other way to explain this failure of the human element. An organization must have the right people, at the right time, to do the right job, inclusive of architecture, engineering, and operational support, or the virtualization initiative will fail. I know of one firm where a team of more than 25 people struggled for a year with virtualization and never got the virtualization infrastructure off the ground, out of the lab. I know of another firm where a 5-person team that grew to 7 over a few weeks’ time implemented a virtualization solution in less than 100 days, which saved the firm more than 8 million in less than 1 year, and almost the same the following year.

Lack of Commitment is a factor I have discussed before. This is the idea that management, from the top down, supports virtualization in real, factual terms. Call it putting your money where your mouth is, or championing the initiative; whatever it is called, it comes down to one thing: support for the virtualization strategy from all layers of management, hands down. It means establishing a virtualization-based culture and attitude that, in effect, makes everyone see virtualization as the preferred solution because it is the right solution where appropriate.

Lack of Leadership is related to commitment, but different. Unless strong, forceful leadership is exhibited throughout the lifecycle of virtualization, which takes years and significant capital cost before savings are realized (remember, 1 year is a long time for management to wait), virtualization cannot survive. Pre-provisioning must be done; realignment of resources has to be done; etc. Management is not always great at taking risks when said risk cannot be rationalized away as a nonfinancial initiative. Virtualization is a financial risk; if someone says that is not so, they have no clue what virtualization as a strategic objective is, or what it requires to succeed in the real world.

Lack of Stability is an intangible factor that can impact virtualization in an indirect fashion. If a firm constantly changes from an organizational perspective, and such corporate variance is not core to the culture of the given firm, virtualization often becomes fragmented: too many owners, designers, or implementers. Standardization is a foundation stone for virtualization, at least from the host infrastructure perspective, if not the virtual instance operating system baseline perspective. Cost avoidance comes with infrastructure reduction and consistency, intertwined with the logic and objectives of standardization. Standards only exist if implemented. I had a boss once tell me that comment was stupid… That boss was not around for long, by the way. If an organization is not stable and consistent, in a way that it can live with, then virtualization is all but impossible, because the first thing lost is uniform use of established standards.

Lack of Value is in reference to the human element without question, not to justification of a virtualization strategy from a cost-savings or cost-avoidance argument. Moreover, this may be the most important issue of all of the above points. For example, in a hypothetical context, a firm has the right talent, commitment, and leadership in place. The firm is stable and aggressive in its development of competitive advantage, which in no small part is based on cutting-edge virtualization design and implementation. But this firm does not value its architecture, engineering, or operational staff in some way. What happens? Failure to establish competitive advantage based on virtualization, hook, line, and sinker. Don’t believe that there is one architect, one engineer, and even one operational support staff person who is critical to virtualization success? Well, look again; they are there, sometimes visible, sometimes all but invisible. But they exist! If you don’t identify, know, and retain the key human resources who designed, developed, and implemented the virtualization 1.0 strategy in place, the ability to establish 2.0, 3.0, etc., which will continue to save a firm millions for years if not decades to come, knock on wood, will disappear as if by magic. Worse, by letting these key talents walk, a competitor has gained or will gain, as if by magic, much if not all of the benefits from years of design and development blood, sweat, and tears, for a fraction of the human cost. Don’t think this is real? Talk to the headhunters who call these key people each week!

The greatest threat to virtualization strategic planning and implementation, both when starting a new strategy and when continuing the success of a maturing one, is a negative economic climate, because every decision that incurs significant upfront capital expense is painful and under extreme critical review before approval. For a virtualization strategy that is mature and successful this may be extreme, since management is trying to squeeze blood out of rocks, and virtualization is always seen as an obvious target against which management can improve expense reduction. Thus the impulse is to not recognize or reward talent, to take said talent for granted, or to discount the potential for loss of talent. The headhunters know this, and they are out in force, right now, because key talent saves real money, especially when said talent can be stolen away. Recognition and reward funds are always hard to come by at the bottom of, or while sliding down, an economic slump, but to not realize that the best and brightest can leave in seconds? To not take serious action to keep key talent? Well, it happens, does it not? Talk about handing the competition a straightforward and effective way to gain or negate competitive advantage! That is the definition of the word… Dolt.

1 comment August 27th, 2010 Schorschi

Is It Lipstick On A Pig Time, Or What?

Ok, boys and girls, quickly, line up, single file, if you please… Those with walkers, canes, and motorized chairs to the front, please…

Why do I think VMworld 2010 is going to be a boom or a bust? To be honest, I don’t know. Having been at every VMworld conference, having enjoyed them all, having been excited, intrigued, etc. over the years, what has me yawning as I write this? What is different about this year? Have I become jaded, at last? Since I live in Southern California, the flight to Northern California, San Francisco, is not a lot of effort. I will see some friends at VMworld, like I always seem to. I will bump into fellow technological peers in virtualization, in the flesh, versus carrying on discussions, rants, and debates via electronic transport. The exhibit hall will be heavy on vague references to sex rather than product merits presented. The officially sanctioned bloggers from VMware will of course ignore me, as usual. Someone will make negative comments about me, as usual, etc., etc., etc. Guess I could go on for a page or two in this blog on what VMworld is like. And no, the weak economy is too easy an out for why VMworld might be a bit different this year.

Has VMworld gone stale? Is VMworld not supposed to be like going to Disneyland? No matter how many times you have gone, you still enjoy it? Have I changed, or has VMworld changed? Well, maybe both? Has anyone really, seriously looked at the attendance demographics of VMworld, say, for the last 2 years? VMware does not publish them, that I know of, which is not a ding on VMware at all. Thus, my observations or recollections are qualitative, not quantitative… For the last 2 years, I have been hard pressed to find any significant number of attendees under the age of 35. In fact, it seems to me, beyond the aging of the attendees, that the number of women in technology, at least in virtualization, may have declined? I believe the majority of the attendees are getting older, older males to be specific, per my perception. This perception is reinforced by the selection of the Wednesday night party themes. Gone are the fun, interesting, geek-oriented events, like robotic combat, in favor of aging rock bands or such entertainment. I even had trouble finding the game console room for the Alumni; anyone remember that one? Where you could play video games and eat high-sugar items in the lounge. Hey, dude, stop jamming me in the ankles with your motorized chair, ok… I am in line to get my bling bag, cough, official VMworld 2010 sponsor advertising bag, just like you, chill, ok?

Sure, this is not a VMware issue; the entire fabric of the IT industry is aging in reference to the human element, in the domestic United States. VMware has become less geek-oriented over time as well. That may not be a good thing. It was the geek-ish-ness of virtualization that had the younger generation curious and engaged. VMware wants to appear like another Apple Computer? But is still trying to emulate Microsoft? VMware is big, not without reason, but maybe more conservative now than is beneficial? I guess VMware needs to decide if it is still an innovator, or a holding company? Its last 3 or 4 years have been, no doubt, holding-company oriented, with some incremental innovation. VMware learned from Microsoft and, to a lesser extent, RedHat that buying solutions or even eliminating potential competition is a proven strategic path. But as VMware acts like an old-folks firm, it gets comfortable, maybe even lazy? Do not forget that ESXi stateless was a hack, done outside of the normal process of development at VMware… oh my… what am I saying… ESXi was a quasi-hack, early on! Good grief… VMware needs to get back to more of this innovative mindset.

What has this change of focus cost VMware? This loss of the innovative spirit that, it could be said, once dominated the entire corporation? Not sure it has cost that much, given VMware is still well in the black, but it has changed the culture of VMware. I believe the culture change has pushed some great ideas out of focus, whatever they were, since VMware is slow to discuss such. Not all, of course, because VMware will have some official stateless model soon; VMworld 2010 is a perfect time to show off such, even if it is effectively about 2 years after the actual idea hit the internet. Moreover, stateless is core to clouding. Clouding has taken off, and VMware does not own the dominant clouds the way VMware wanted to, I suspect. More often than not, the foundation stones in a cloud design are more than vSphere, as KVM, in some cases Xen, and even Hyper-V run in parallel. I still have my doubts about Xen, and I am not a fan of Hyper-V. But dominant cloud ownership has slipped through the VMware fist, not that VMware did anything wrong? Well, the world has changed; no one wants to be locked into a single vendor or core architecture. IBM is learning this, in a very painful way, with System p based architecture, for example. Google the number of cloud providers that use VMware but also use Hyper-V, KVM, and some even Xen for their respective hypervisor-based infrastructures; how many support multiple hypervisors? Most if not all… moreover, look at the application virtualization space and see what is coming on fast. Or in grid computing? Even virtualization on mainframes? Parallel lines across multiple vendors are the norm, not the exception… this cannot make the individual vendors happy, including VMware.

However, I think the single opportunity that would fire up everyone would be for VMware to officially spin up a division to get embedded ESX going, and maybe even call it something other than ESXe? Offer it as a supported but free product; maybe revitalize VMware Server or Player, but not lock it into Windows? Support it on other variants of hardware devices; make it very hard for Google, Amazon, or anyone else to be silly enough not to use it, or endorse it for non-datacenter models. Note I said hardware variants: iPods, MP3 players, ATOM-based netbooks, etc. Small embedded systems are begging for mature virtualization. Was not virtualization for cell-phone processors all the rage in the blogs about 2 years ago? Many of the various technological industry watch sites were talking this up, right? Hello, why is VMware hypervisor-licensed technology not in every automobile on the planet, running under Microsoft Sync for example? So that customers can move their virtual personality from desktop, to laptop, to cell, to automobile, to, heck, even the computer built into the airline seat in front of them? The virtualization industry needs a shock, and I would like to believe that VMware has the will, and the talent, to do it, at least one more time.

If VMworld 2010 is just another VMworld going down the same path established from, say, 2007, because VMware again did not allow competitive ideas or solutions to attend as in the past, or the only innovation promoted is an official stateless design? Or worse, just more vCenter plug-ins, or vCenter enhancement toys? I fear VMworld 2010 will be just lipstick on a pig… which is a horrible metaphor… but applicable, no? Oh, I wonder how many ‘Got Xen?’ shirts I will see at VMworld 2010? I think we may see some ‘Got KVM?’ shirts this year, or even funnier would be ‘Have Hyper-V R2?’ shirts. VMware may not appreciate being poked in the eye a bit with such shirts, but it does get some energy levels up among the attendees, and that is a good thing, right?

Add comment August 25th, 2010 Schorschi

Lost Understanding and Mismanagement, One Way to Kill Virtualization

Critical Observations – Chapter 01

First, yes, I have been on hiatus: new home, new neighbors, etc. Moving is not fun, at least not for me; I hate packing and unpacking. But I do love that I seem to find ways of donating, or losing, tons of stuff that I just don’t need. Where I moved to, well, it is in the Southern California desert; it is a bit warm during this part of the year, every day over 100 degrees Fahrenheit. But the view is wonderful if you like desert scenes. The community overall is older, so I feel like the younger set, even though I am in my forties! My Dachshund gets lost in the new house; it is bigger than the past one. But I digress… this blog is about virtualization.

A good friend of mine lost his job over a year ago. He has struggled to find work, but recently has found it, with a smaller consulting firm, yes, IT consulting of course. He has been telling me a number of stories about the clients of the firm; since I started in the IT industry with my own very small firm, we share a common interest in unique consulting experiences. But one specific story struck me as significant and odd… a firm had been doing virtualization for years, in this case based on VMware, although the platform is not material here, and, surprise or not, was now seemingly retracting from virtualization, in the sense that the rate of virtualization conversion was dropping off. Not just slowing because all the easy migrations to virtualization were completed, but seemingly in a resurgent hardware acquisition trend. Talk about bucking the industry trend?!

As my friend explained the history of the client, the change in direction became more obvious and overt, and my surprise increased accordingly. I have seen organizations change strategies with virtualization before, but never abandon virtualization. In truth this organization had not quite given up on virtualization, but they definitely were struggling with it. I started asking questions, being intrigued to a greater degree, as the discussion over lunch continued. Did they change business goals or objectives? Completely change strategy for IT such that virtualization was somehow not appropriate? Did they now depend on a specific application solution that provided less-than-wonderful performance on virtualization? Were they outsourcing their IT infrastructure such that virtualization managed by them was not appropriate? Were they experiencing chronic or systemic issues with virtualization? The answer to all of the above questions was the same: no, no, etc., no. Well, if it was not a business or technology change that resulted in an operational change, it had to be personnel related, right? So I started asking questions again… did they lose their architecture or engineering IT expertise? Did they lose their IT operational support expertise? No, and no. So I asked the question that I believed would expose the issue… Did they lose the senior management support that championed virtualization? Yes.

This is rare but can happen. So I asked more questions… How exactly did they lose support at the top? The answer was… between staffing changes, reorganizations, etc., the newest iteration of top management was less confident with virtualization than the previous management had been. Virtualization was now understood at a logical concept level only, not at a technical level as before. For want of a better term to illustrate the situation, management was dumber than ever before about virtualization. So as different groups in the firm griped about virtualization pitfalls or quirks, which happens at times, management began to cave in to the pressure, rather than reaffirming the benefits of virtualization, in effect creating a counter-culture against virtualization.

The story does not end here, fortunately, or should I say unfortunately, because top management after some time realized they were off track with virtualization and now have a significant problem getting back on track. Virtualization was never abandoned completely, but it was demoted as a priority to the extent that parts of the organization established their own objectives around virtualization, to a degree that re-integration and re-alignment will be painful. Significant time and resources, taken from the core focus of the firm, must now be used to resolve the inconsistencies of the situation.

How does an organization avoid this? That is a good question! This situation is difficult to avoid, since the very individuals who need to understand the issue, and thus avoid the problem before it impacts profitability, are part of the problem when they should be part of the solution. The further up the chain of command, the easier it is to be blind to technological issues. That is just life, because very few IT-savvy managers become Presidents or CEOs. Worse yet, if a CEO is not IT savvy to a reasonable degree, then the role of the CIO is misunderstood, misapplied, or ignored to some degree.

When I asked my friend how this situation was addressed, he told me that his firm warned their client, in this case, that the situation existed. At this point, I was waiting for the details… on my plate was lunch, forgotten for the most part. He said that the senior IT manager in the firm stated that the problem was understood and being addressed. My friend then said… well, if you have questions, I have a good friend who, for lunch, would be happy to discuss the situation and provide insight that might be of assistance. To this offer, or rather volunteering of my expertise, which I often do over lunch, the reply was… no thank you… we have sufficient individuals who are certified and knowledgeable about virtualization. My friend nodded in understanding, or so he said to me, and let the issue drop.

At this point, I remembered my forgotten lunch, so as I munched on my turkey-on-whole-wheat sub and slurped at my diet soda, my friend said… I wanted to tell him that you could train his people in your sleep, that you have been a virtualization architect for more years than the client has been using virtualization, never mind that you often provide feedback that finds its way to the very people who developed the materials used to train his people… but he was not going to listen. To this I just smiled. My friend continued… So I ask you, was the real problem that the senior management now in power was less than knowledgeable about virtualization? To which I replied, finishing his train of thought… or that senior IT management was not managing up, effectively supporting virtualization goals? My friend smiled.

The bottom line is, my friend and the consulting firm he is now part of will profit from the situation illustrated above. As for me, I lost a potential free lunch, but that is ok; I am sure the above organization is only one of many in the same situation. So I should get a few free lunches now and then.

Add comment July 28th, 2010 Schorschi

ToutVirtual Signs OEM Agreement with Symantec to Deliver Next Generation Business Continuity Solutions

VirtualIQ now adds Symantec’s Backup Exec™ System Recovery to provide High Availability and Disaster Recovery to virtualized environments

Carlsbad, Calif. July 13, 2010 – ToutVirtual, a leader in virtualization intelligence, optimization and performance-management software for virtual computing infrastructures, today announced that it has signed an original equipment manufacturer (OEM) agreement with Symantec Corporation to add Symantec’s Backup Exec™ System Recovery (BESR) solution to ToutVirtual’s VirtualIQ suite of products.  The strategic agreement will enable ToutVirtual to deliver simple-to-install, easy-to-use business continuity solutions that leverage virtualization for reducing business downtime. Combining Symantec’s BESR and ToutVirtual’s VirtualIQ creates a unique offering of high availability and disaster recovery with a platform-agnostic virtualization management suite—previously not available in the market.

 Symantec’s BESR will be offered as part of the next release of VirtualIQ Pro. VirtualIQ Pro, ToutVirtual’s flagship product, is a management and automation suite designed to support customers in every stage of virtualization deployment. The wide range of features helps users from the planning phase to established virtual server rooms running hundreds of virtual machines (VMs).  With the addition of Symantec’s BESR, VirtualIQ Pro will now also offer:

  • End-to-end business continuity on desktops, physical servers, and VMs
  • Continuous backups to eliminate backup windows
  • Application-aware backups
  • Fast disaster recovery anywhere in minutes versus hours/days
  • Disaster recovery to multi-vendor and mixed/dissimilar hardware
  • Disaster recovery to dissimilar hardware
  • Disaster recovery to VM on any hypervisor
  • Disaster recovery to remote location / cloud
  • Physical to virtual machine (P2V) and virtual machine to physical (V2P) workload migrations
  • Virtual to virtual (V2V) platform migrations supporting popular hypervisors including VMware, Microsoft, and Citrix

 Many small to mid-sized businesses (SMBs) and data centers still rely on tape-based backup solutions for their disaster recovery plans. In the event of a hardware failure, the recovery times of tape-based backups can run from days to weeks. With ToutVirtual’s VirtualIQ and Symantec’s BESR, that time would be cut down to approximately 30 minutes. Considering that 85 percent of servers in the market are non-virtualized and that there are approximately 1.5 to 2 million tape-based enterprise and mid-market customers, there is an opportunity to introduce and leverage virtualization in the context of a disaster recovery solution.

“We believe that this solution will continue in establishing ToutVirtual as the leader and customers’ choice in flexible, integrated solutions for virtualization management,” said Jess Marinez, president of ToutVirtual. “The agreement with Symantec, a vendor agnostic market leader, gives us the ability to also offer high availability and disaster recovery in a fast, cost effective and easy to use package.”

“We are excited about the agreement with ToutVirtual which allows us to offer our Backup Exec System Recovery product to a broader segment of the market,” said Mike Garcia, director of product management for Symantec’s Backup Exec. “Our customers will have access to a vendor agnostic solution that leverages virtualization to drastically reduce business downtime with a solution that installs quickly and easily.” 

VirtualIQ Pro with Symantec Backup Exec System Recovery will be available soon from ToutVirtual and its channel partners.

Add comment July 14th, 2010 Administrator

Virtualization Cause for Organizational Civil War?

When Virtualization Fails to Change the Organizational Culture

Is your organization at war? No, I am not talking about businesses that live the methods and techniques defined in the ancient texts 'The Art of War' or 'The Book of Five Rings'. But if you don't know what those texts refer to, you should. Anyone in business, politics, etc., should have read both at some point early in their career, if for no other reason than to understand what a lot of others have read, and follow to various degrees. But I digress, so I ask again: is your organization at war with itself? This is not a goof-ball question, but a serious one. There are organizations that see technology as a necessary evil and others that see it as a competitive advantage. For those that see technology as a competitive advantage, internal war is not likely; but for those that see technology as nothing but a cost center, conflicts between the technological culture and the competing profit culture abound, at the strategic as well as the tactical level.

Virtualization, I believe, in some ways promotes if not outright supports conflict in an organization. Why, you ask? Well, there are a few points I have recognized over the last 6 or 7 years that provide the foundation for this:

  1. Adds unrecognized complexity to the information technology infrastructure
  2. Promotes somewhat painless initial early adoption, the low-hanging fruit, but that does not last
  3. Does nothing to resolve hidden issues that already exist, often hides them deeper
  4. Saves cost by avoidance only, which management often forgets
  5. An opportunity to reconsider business as usual, but often suffers from same
  6. Automation can improve time to market, only if pre-provision effort is done right
  7. Change is frequent, organizations talk change, but talk is cheap, when expected of others

This list is of course not exhaustive, just what is often on the tip of my tongue when I am discussing virtualization with others. The same points apply, to a reasonable degree, to clouding or cloud-based infrastructures, since virtualization is often the foundation for any serious cloud-oriented infrastructure. The above points illustrate three points of conflict in any organization: service, cost, and mindset. Let us explore these topics a bit, just to establish common perspective if not understanding.

Service is obvious: clients or customers want more, faster, better, cheaper. This demand is never going to change, no pun intended. The problem is, to get to more, faster, better, or even cheaper, things have to change, in a significant way. Clients don't want or like change. They resist it, deny it, outright refuse to agree to it. So without a top-management mandate, if your organization is internalized, this is a nasty situation. At this point am I preaching to the choir? Sit in on the arguments about physical-to-virtual, or virtual-to-virtual moves from an original virtualization infrastructure to a cloud-based infrastructure, and talk about analysis paralysis! My mother, a 30-year veteran of corporate management at a Fortune 10 firm, told me something on my first day of corporate life: the only constant is change. No, she did not coin the phrase 'change or die', but she did make the implication, no? With all change, real work must be done, expending time and effort. Clients have downsized or dumbed down their technology groups to the point where they have little or no understanding of the technology they helped create in the past and now use, in a literal sense, as black boxes, and thus they lack any realistic capacity to embrace change of something they no longer understand. Think I am kidding? Consider this: after the third reorganization of a technical group, each cutting 10% of staff, gone forever, laid off, what percentage of your technical infrastructure is unknown? 30%? Wrong. Cross-training and job sharing are not efficient; the realistic number is more like 60%. Yes, 60%. With each organizational restack, knowledge is lost, skill is lost, talent is lost, never mind the rush to stabilize the organization, which means that training is only 50% effective, if done at all. Bingo! That 10% loss of headcount is at least a 20% loss of understanding. Take that hit for each reorganizational iteration and you are at 60% of your infrastructure no longer functionally understood, and that is a conservative estimate.
I have seen worse.
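For what it is worth, the compounding above can be sanity-checked in a few lines. The function and the per-reorg rates are mine, illustrative only, chosen to bracket the article's rough numbers:

```python
# Illustrative arithmetic only: compound the per-reorganization loss of
# functional knowledge estimated above (a 10% headcount cut amplified to
# roughly a 20% effective loss of understanding per iteration).

def unknown_after_reorgs(loss_per_reorg: float, reorgs: int) -> float:
    """Fraction of the infrastructure no longer understood after N reorgs."""
    return 1.0 - (1.0 - loss_per_reorg) ** reorgs

print(round(unknown_after_reorgs(0.20, 3), 3))  # 0.488: roughly half unknown
print(round(unknown_after_reorgs(0.26, 3), 3))  # 0.595: near the 60% estimate
```

At 20% effective loss per reorg, three iterations leave roughly half the infrastructure unknown; push the per-reorg loss a little higher and the 60% figure is not pessimistic at all.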

Cost. This is the interesting point, the one you would think trumps all, but cost is subjective, not objective… whoa, you are saying, how is that true? Well, for example, take IBM's System p based infrastructure versus its System x based infrastructure, a clash that has stood for years. With the current and next generation of Nehalem, and incremental improvements in backplanes, memory speed, etc., where System p and System x use the same chassis designs and components, the only real difference is the processor and the software, right? What if I could prove to my customers today that System x can go toe to toe with System p, and be cheaper to purchase and support, especially in software license cost, not just hardware? That many applications key to System p are now in the System x space, for less? That hands down, with very few exceptions at the very top of the System p performance space, System x is as good or better? Meaning System x costs less than System p? You would say what? Go with System x, right? Wrong! Oh, and loyalty is not the sole issue; the key issue, based on my experience, is the degree of change. Yes, change. Clients hate change, but at the same time demand things that are only possible with change, often radical change. The System x versus System p scenario is just one example; many variants exist, just look for them. And that nasty reality of skill and talent lost to reorganization muddies this issue too!

So that leads us to the last point: mindset. Cost is said to trump all, but that is only true if the evaluation is objective and real, effective action is taken. Moreover, customers want faster, better, cheaper… which has been itemized how many times in this article alone? But change is usually embraced only when it means others change, not themselves. How many times has an older system been moved, tweaked, or revamped rather than just ditching the door stop for a new solution? Not often. Virtualization forced this to happen to a point, but not in all scenarios, or physical-to-virtual would not exist as a concept. We have explored this above, establishing how demands for change are for others to change, not oneself, and how cost savings, which everyone says is a key motivator, is not, when talent, skill, time, and resources are insufficient. The mindset that dominates comes in one of two flavors. The first is the classic autocratic, often termed strategic, do-it-or-else, downhill-push idea: someone at a strategic level has stated the direction, and that direction is to be followed, never mind the facts. Fine, but this is rare. The other, often initiated by the client as a demand, is the competitive or tactical crisis-response idea: if we don't do this and the competition does, or already has, we are in real trouble. But is either a positive approach? No. My experience has been that technical management often has no teeth, is doomed to be nothing but gums, so the strategic mindset is very rare. Moreover, the tactical mindset is often incomplete, minimalistic, a rushed effort; mergers and acquisitions often fall into this pitfall.
Where a decision was made to eliminate some competition, or expand a potential market by absorption rather than real competition, the push to integrate forces a lot of bad or short-sighted decisions, and the technical organization has to scramble to respond with half-assed measures or compromises that kill strategic flexibility, often when a solution is already in production! How many articles have you read where some firm acquired another firm, only to state to shareholders a year later: we integrated, but we did not realize the full cost savings we expected. And the bottom line looks good this year, but don't ask us about the unrecognized costs, in the coming years, of undoing what we did wrong this year. Well, duh.

So the question stands… Has virtualization changed your corporate culture? Has it broken down the mindset that dominates your organization? Has it prompted new ways of thinking? Has it, in fact, improved service to your clients? Has it encouraged a culture to accept change?

Add comment June 9th, 2010 Schorschi

Intelligent Workload Management Myth or Mystery?

Virtualization Critical Evaluation – Chapter 15

Ever wonder why various firms have published only parts of their wondrous cloud designs?  The provisioning aspects and the infrastructure aspects seem obvious, no?  PXE, DHCP, TFTP, HTTPS, VLANs, PODs, switches, racks, chassis, blades, servers, remote lights-out or control devices, out-of-band devices, etc.; stateless operating systems, floating applications, node-aware applications, etc., as services; grid application solutions as a primitive form of cloud services?  The list goes on and on, but in reality nothing discussed is new.  Not really a significant leap in design or concept.

What is not discussed is the workload management aspect of the cloud.  In the last few years a small number of patents have been filed that discuss concepts and unique implications of resource dynamics and autonomous computing infrastructures.  Some describe types of intelligence applied to resource demand needs, availability needs, etc.  This would be the foundation of workload management for a cloud, to be sure.  I leave it to the reader to do some basic research on what has been published from a patent perspective; but beyond hints at how workload management might be implemented from a logical design perspective, there is very little information on how it is implemented at a functional or tactical level.  Why workload management should be implemented has been discussed at various points since the initial days of virtualization theory, or virtualization reality as we know it today.

Moreover, a number of firms have approached aspects of workload management, usually from a control-and-reporting perspective that grows into a historical-trending, capacity-forecasting methodology.  Very few entities have tackled the predictive analysis aspects of workload management.  No, VMware DRS is not predictive analysis, nor does VMware CapacityIQ have such.  This is not a stab at VMware, but an acknowledgement that even the 800-pound gorilla of virtualization has trouble with the subject.  Predictive analysis is non-trivial, complex, and difficult to do, and to do it well the use of a magic wand might be needed, and as far as I am concerned, strongly recommended.  No disrespect to the Wizards Guild intended.

If you want to give yourself a headache, Google workload management and see what appears; then Google predictive analysis; and then try to intersect the two topics. Some of the most interesting solutions I have seen never appear in the joint index Google returns.  It appears as though the two topics are mutually exclusive?  Ha!  No way.  Note, we are not discussing business analytics or business intelligence here, but cloud computing.  Business or application analytics are rules of logic that drive solution behavior at the application level, ignoring the infrastructure the application runs on, with an expectation that said infrastructure is homogeneous, or quite uniform.  The grid technologies, even cluster technologies, and the virtualized high-availability models of today are taken for granted by business application analytics, and are quite static compared to what a heterogeneous cloud should be able to do now, or in the very near future, no?
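To make the distinction concrete, here is a minimal, purely illustrative sketch of what predictive (rather than reactive) workload management means: fit a trend line to recent utilization samples and act on the forecast, not the current reading. The function name, the samples, and the 80% threshold are all assumptions of mine, not any vendor's API:

```python
# A toy sketch of predictive workload management: extrapolate a least-squares
# trend through recent CPU samples and decide on the forecast value.

def forecast(samples: list[float], steps_ahead: int) -> float:
    """Ordinary least-squares line through the samples, extrapolated."""
    n = len(samples)
    mean_x = (n - 1) / 2.0          # sample indices are 0..n-1
    mean_y = sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
             / sum((x - mean_x) ** 2 for x in range(n)))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + steps_ahead)

cpu = [40, 44, 47, 52, 55, 61]            # % utilization, 5-minute samples
predicted = forecast(cpu, steps_ahead=6)  # half an hour ahead
if predicted > 80:                        # hypothetical migration threshold
    print(f"migrate ahead of the spike (forecast {predicted:.0f}%)")
else:
    print(f"no action needed (forecast {predicted:.0f}%)")
```

A reactive policy would look at the current 61% and do nothing; the whole point of predictive analysis is acting before the 80% line is crossed, and this trivial linear fit is only the crudest possible stand-in for that intelligence.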

So where does that leave us?  With more questions than answers?  Yet again?  Some might expect, or decry, that I have not rattled off 5 or 10 products, itemizing the good, the bad, or even the ugly aspects of each.  Rating, or even ranting, about whatever.  Not this time.  This is a topic that any virtualization architect with any integrity should learn the old-fashioned way: do your own research.  That is the only way to appreciate the complexity of the subject, the difficulty of design, and the stress of achieving an implementation.  Intelligent workload management is a mystery still, but no longer a myth; the functional components exist, and can be leveraged.  However, getting square pegs into round holes requires some original thinking and unique effort.

Add comment May 11th, 2010 Schorschi

Has Virtualization Stalled?

RHEV and Hyper-V Gaining or Losing Ground against VMware?

Has virtualization stalled? It is a straightforward question, but, to be fair, I would say a trick question. From a customer, or end-user, perspective, virtualization has not stalled; in fact, if percent of adoption is any indication, some organizations are just at 10% virtualization of total targeted server infrastructure, many others even less. Never mind desktop infrastructure. A few surveys I have read focused on 2010 suggest that although some organizations have yet to embrace clouds, virtualization in its own right is still going strong. For example, BMC surveyed some 400 IT organizations, where just about any solution proven to save capital expense and/or reduce TCO was noted as important if not critical, including virtualization; but clouding was not that popular, per the results of the survey. If you have read my blog, I believe a couple of months ago I noted clouding is not for the faint-hearted. The learning curve is steep, the implementation plans complex. It appears to have scared off more than a few.

So, has virtualization stalled? The question is not in doubt from an end-user perspective; the key benefits are still there: faster provisioning, server sprawl avoidance, infrastructure consolidation and improved resource flexibility, incremental stability improvements, etc. Therefore, why am I asking if virtualization has stalled? Because, my friends, the 800-pound gorilla, Microsoft, seems to be more interested in datacenter-scope solutions than hypervisors. Is Microsoft really going to take another two years to get the next major release of Hyper-V out the door? Moreover, the imperial penguin, Red Hat, is taking forever to add basic functions to RHEV, or to enhance existing functions to even be on par with Hyper-V. We are talking about the basics, things Xen perfected years ago. RHEV may be better suited as a desktop infrastructure solution, yet another nickname for VDI? Maybe, just maybe, the race to establish the next killer hypervisor has been won? Or defaulted to VMware? Has VMware freaked everyone out with the potential of stateless ESXi? Fellow blogger Chris Wolf has noted as much. But is that it?

Can VMware establish enough barriers to the competition to hold the high ground for another 5 years with stateless ESXi? What will VMware show off this year, 2010, at VMworld to maintain a strategic advantage for years to come? Come on, it is in Las Vegas, where image is everything, right? Will there be lap-dances in the vendor exhibition booths, all flash and no substance? That is the next logical step for the vendors in the exposition, right? There were a few hot numbers walking around at VMworld 2009, oh yeah! What happens in Vegas, for the most part, stays in Vegas? The gloves, or more… are going to come off this time! Maybe it will be a class act, and just the gloves come off? Rather, I hope VMware knocks our socks off, not to mention anything else. Since this is a G-rated blog, nothing more needs to be said. Would suggesting that virtualization needs a boost be just a bit too suggestive at this point? The problem is the cat is out of the bag… stateless ESXi is a known animal to some extent. vCenter of course needs some help to resolve scaling and scope issues, but that as well is a very tired horse, and is an expected promise from VMware to customers that must happen sooner rather than later.

The buzz is… that hot-plugging of things will be big this year for virtual instances. Hot-add of CPUs, hot-add of memory, hot-add this, hot-add that, hot, hot, hot, etc., etc., etc. Some are using terms like quick-this or quick-that. Everyone loves… a quicker solution, cough. Although Microsoft quick is years away? Of course, this only makes sense if the operating systems support the hot or quick features well. I believe Linux will adapt faster than Windows. However, I believe VMware needs something outrageous and over the top, something that none of us saw coming! Something that is all VMware, not just repackaged or realigned technology! Just like when vMotion was first released: it was geek tech, it was surreal… remember that? It was technology that was an obvious and simple winner, hands down. It was, well… hot-migration, not just quick-migration! Yes, hot-migration. Now, go take a cold shower; VMworld 2010 is still 6 months away.

3 comments April 28th, 2010 Schorschi

VMware and the Future of Open Source Interconnected?

Where will VMware be in 2015?

Yes, if you have not read the subtitle, stop, and read it. Yes, it really does say 2015! Why does it say 2015? Because the IT industry has blinders on; because the IT industry is so buried, wrapped, fascinated, engrossed, paranoid, and infatuated with cloud computing that it will dominate thinking for years to come. Everyone is chasing the clouds, busting clouds.

What is cloud computing as a strategic concept? Cost reduction? Resource reduction? Efficiency? Effectiveness? Ease of use, ease of provisioning and deployment? No to all of these, at least for now. Ever notice that no one is publishing financial numbers on cloud development and implementation? IBM has even pulled its aggressive cloud-oriented commercials that were everywhere a year ago. Why? Because getting a cloud to float skyward, to defy gravity rather than sit like a rock on the ground, is one step short of magic straight out of a Harry Potter story, and it takes real money, lots of real money, to make it happen. And that money is hard to find outside of a few Fortune 50 firms, given the current economic instability and the squishy expectations for the next couple of years, no disrespect to the Obama administration, all wearing rose-colored glasses. There are more slap-stick-happy vendors hawking capacity, provisioning, and control application frameworks to corporate IT organizations than ever before, and some are making big money at it. But notice the vendors are not eating their own dog food: they are providing clouding enablers or cloud building blocks, not complete cloud solutions, not complete monolithic cloud frameworks. No one vendor can be all things to all customers; that is the true theme of cloud busting, cough, cloud design. No disrespect to IBM, but IBM cheats; IBM has so many products in so many areas that it is in a unique position to appear to be one vendor while, at least from my perspective, operating as many individual solutions under one label.

Even Google and Amazon have maintained, in various ways, an odd speculative silence, declining to explain some aspects of their respective environments, citing competitive advantage or corporate secrets. However, it may be that building a truly responsive, resource-driven, dynamically balancing computing organism took many years to establish, quantify, qualify, and perfect; well, if not perfect, at least a viable construct, with a reasonable expectation of positive ROI plausible at some point in the future. Most of what the rest of the IT industry has seen is just the results of 5, or realistically closer to 10, years of effort: the actual results of top-of-the-line design, the enlightened, vaporous dream before concrete substance in the architectural engineering and operational implementation of computing. They guessed, and guessed right. It could have gone wrong, very wrong. It could have been a great solution still looking for a problem if Google had lost the search engine wars, or if Amazon had continued to bleed capital as it did from 1995 to 2001.

Clouding is expensive, complex, and it eats careers like five-year-olds going after candy when a piñata is cracked open wide by a broom handle, or even better, a baseball bat. I have watched grown men collapse into mumbling masses of Jell-O before my conference-call virtual eyes at the words… Work-Load-Manager.  Planning the move is one thing; doing the actual move to cloud-based computing is another. Clouding is an example of easier-said-than-done, to be sure.

What does this have to do with open source? Well, clouding is incompatible with (classic or traditional) open source! Oh, geez, I can hear the naysayers now… yelling at me that that is just bullocks, cough, bollocks, plain and simple! However, before you go into the garage and look for that jar of tar you have been saving to use on me, and decide to recycle that old feather pillow in a responsible manner, or think of burning the A Proper Virtual World blog title in 6-foot letters of good old Southern Pine driven into the ground in my front yard, wearing the funny-shaped hats made popular by crazy radicals about 60 years ago, hear me out.

I am concerned that the loser in this Clash of the Titans, as it might objectively be classified (not to be confused with the B-class fantasy movie of the same name from 1981), will be open source. Sure, the pillars of many cloud solutions will have components, tools, and such that were once open source by origination, but corporate IT does not lay out millions on open-source solutions that do not have, at some point, a commercial and accountable entity that can be called out on the carpet to answer for bugs and issues, never mind a solid and significant long-term strategy for design, development, and implementation, to achieve consistent and considerable value-for-dollar longevity across the various elements any cloud environment must integrate. For example, DeltaCloud is a great idea, but it is not a complete solution. Moreover, unless some commercial entity is in real control, and is successful in getting anyone to adopt it, ideas like DeltaCloud have a questionable future; and that very sense of accountability, ownership, and control that is demanded defeats the flexibility of open source. So is open source that is managed and driven like a commercial solution still viable as open source? Red Hat, for example, has Fedora Linux; it is a flavor of open source, but it could be classified as an open beta, or a long-lived release candidate platform, perfecting features expected to be integrated into the commercial solution, Red Hat Enterprise Linux. Don't believe it is so? Fine, your rose-colored glasses are sliding off your nose.

Clouds, once implemented, will be around for decades, not just years, maturing and growing in complexity. The commitment of time, effort, and energy does not make sense unless the cloud will be strategic, and a stable entity in its own right once alive. Furthermore, in computing terms clouds are geological in speed where change is concerned: clouds will not change integrated components fast. Change in a cloud is more like movement between tectonic plates. No, I am not talking about growth of the cloud's scope in reference to capacity, logical enhancements, etc., but change of the actual foundational vendors and solution components upon which an enterprise-scale cloud is built. Moving from VMware to KVM, for example, or even to Hyper-V, takes a long time.

So, why did I mention VMware in the title? Get your game on, because I am about to crack open the piñata… VMware has a unique opportunity, but will miss it unless a few critical things happen, now, changing the current mindset so that new ideas are reflected in the solutions that materialize in the future:

  • First, and I hope it is sooner rather than later, VMware changes vCenter. VMware has hinted at this, but 2 years of hints have produced nothing. The enterprise view, or federated view? Please, that was nothing from a customer perspective but an additional plug-in in context. Every single negative comment I heard over the last 2 VMworld sessions had, somewhere and in some part, a reference to vCenter and its current as well as past history of performance issues, stability issues at enterprise scale, and the completely horrible performance of the published VMware API (SDK). Note for the record, the term horrible is not my characterization but that of several developers/publishers, either overheard or volunteering the term as a negative descriptor in informal and unprompted conversation. Many of these developers are also, by chance, open source active, very active in some cases.
  • Second, VMware has two very real and significant threats coming on fast, which, yes, I warned of months ago or longer. Red Hat RHEV is plagued by some real issues, but by 2012 it should gain market share fast. RHEV is aimed right at VMware, right at its foundation: ESXi. Microsoft Hyper-V R2, with the Cluster Shared Volumes (CSV) feature and an almost-as-good Live Migration in direct comparison to VMotion, only needs to support NAS, I would say specifically NFS, to scare VMware silly. Some still say Hyper-V comes up short, but since VMware has given Microsoft 4 years to get something plausible to the table, 2 more years does not seem that long, does it? Just imagine what Red Hat will or could do in 6 years with RHEV. Neither Hyper-V nor RHEV (well, KVM at least) is locked into its respective management solution as the only realistic control surface, to the same extent that ESX, and to a further extent ESXi, is to poor vCenter.

VMware is not a cloud creator or framework owner, but a cloud component; to think otherwise is nearsighted. Streaming of applications and services will gain ground, and the importance of the OS will shrink to what the OS is on cell phones today: something no one even thinks about. So will the hypervisor become over time, if not less significant. Virtual machines will get thinner and lighter because of stateless deployment methods and application streaming, more than because of anything strategic or radical in the design of the next-generation hypervisor. In truth, I wish EMC would sell VMware to a cloud component vendor. Such a deal would be one way for VMware to get through the pain of changing its view of itself. VMware should focus on being a very good cloud component technology and nothing more, changing its cost model from that of Neiman Marcus to Wal-Mart, becoming a key, but cost-sensitive, element of a total cloud solution as its strategic focus. The days of being all things to all are over for VMware. VMware vSphere was an interesting idea, at least to the VMware marketing brains, but it is already showing its age. Monolithic ownership, both horizontal and vertical, is a dinosaur in corporate IT, much too heavy for the cloud mindset of the future. Single-vendor solutions just are not as flexible as they once were. Maybe corporate IT has outgrown the one-vendor, one-point-of-accountability mindset, lost its zeal for expensive, restrictive frameworks that force customers to follow the vendor roadmap or timeline? Remember what happened to the dinosaurs?

Even if VMware changed direction the very second this blog is read for the first time, it would be 2015 before VMware realigned its strategic direction and had viable solutions for sale, based on the speed at which VMware moves, looking at past evidence of change. Wal-Mart, as a high-volume, low-cost organization, is nailing the competition worldwide; is that a lesson VMware can ignore? Open source will always exist, but will the variants of open source that enterprise customers purchase be in the hands of a few significant commercial firms? Maybe not. VMware will have to be a leaner, meaner organization in 2015 to survive, never mind hold market share. Focusing on a strong and well-understood niche would allow VMware to combat RHEV and Hyper-V toe to toe, undercutting the most significant advantage that RHEV and Hyper-V have: lower cost than VMware.

Does VMware want to mature like Apple Computer? Have great solutions, but only about 20% of the market? Apple may own the mobile music market, but not the mobile phone or laptop market. Is total cloud ownership realistic for VMware? That is what VMware tried to do.  Don't let anyone tell you different. The vSphere concept was aimed at cloud ownership, positioning vCenter as the authority of a cloud, maturing into the dominant entity in the cloud. Anyone who says otherwise is not seeing the big picture from the VMware perspective over the last year, and the next few years. Unfortunately vCenter itself, never mind the published VMware API/SDK, is not sufficient to interconnect even the few vCenter enhancement/add-on vendors needed to establish the most basic of clouds. I think VMware thought they would have more time to get vCenter right.

Nope. So vCenter becomes little more than a communication proxy, driven by force, top down, since no other option is possible if ESXi or ESX is to be used. This leaves a wide-open opportunity for any other hypervisor to improve its interoperability within a cloud. In a sense, vCenter has not protected VMware, nor has it established a solid foundation for VMware to move into a cloud ownership position; it has backfired on VMware.

1 comment March 30th, 2010 Schorschi

Previous Posts