PowerPath for path failover (the pre-requisite for Federated Live Migration) is free with the purchase of a VMAX or VNX.
As I said, FLM also supports native MPIO and Veritas DMP, thus PowerPath is not required for non-disruptive relocations.
(For its advanced functions, like dynamic load balancing across up to 16 ports, PowerPath is licensed per host, and is usually sold under Enterprise License Agreements for volume discounts).
Meanwhile, relocating LUNs from one IO Group to another non-disruptively is supported only on two host operating systems, and not even IBM's own enterprise server platforms.
And it is not even possible to have a host connected to the same LUN(s) on two separate IO groups - a native feature of both VMAX and VPLEX for maximum HA and resiliency.
This all just demonstrates once again the significant benefits that best-of-breed solution suppliers (such as EMC) deliver. We only do storage, and we do it better than anyone else.
So much for Tony's one-stop-shopping argument...
How much does PowerPath cost per port?
You need to read the specs a little closer: VMAX 40K has 2x 6-core Westmere 2.887GHz (3.066GHz Turbo) on PCIe Gen2 per node/director. That's 4x 6-core per engine, 32x 6-core (and 8 PCIe Gen2s) in a full system. And the numbers are cache MISS.
2 simple points about migrating across IO Groups: 1) Neither VMAX nor VPLEX need anything special to do this - ANY LUN can be exported on ANY port (or EVERY port) on EVERY engine. No need to hack around with multi-pathing or brand-new SVC features.
2) It is an IBM tech note that states that ONLY the 2 Linux versions (and nothing else) listed on the published qualification matrix are currently supported by SVC non-disruptive movement.
VMAX Federated Live Migration allows non-disruptive moves between separate arrays, and doesn't require ANY changes to host multi-path drivers. Instead, our arrays are smart enough to understand which multi-pathing each host is using, and then to adapt themselves to handle the particular implementation. FLM currently supports PowerPath on ANY host, plus DMP and native MPIO on almost all hosts. It even supports SCSI-2 & SCSI-3 reservation clusters...
If you require hosts to update their software in order to use your non-disruptive features, it's not really non-disruptive, is it?
Ours works; yours has quite a ways to go before you start bragging about it.
You've only got 4x 2-way Intel Westmere(?) CPUs in your VMAX, so at about 6GB/s per DDR3 channel, and limited by the QPI on that generation, I can see about 48GB/s cache hit... at 100% memory efficiency...
As for migration, it's quite sad that (IBM included) the open source multi-path OS implementations work, and those you pay for don't always - with AIX we found an edge case that we want fixed before releasing full GA support - support for those other OSes will be coming when they fix their problems... ;)
Thanks for your comments, but I have to admit I am not aware of any competitor claiming 56GB/s of 100% cache miss bandwidth or 2M+ cache miss I/Os per second. And indeed, we routinely demonstrate our superior response times vs. VSP.
But then again, since VSP enables silent data corruption for externally virtualized capacity, it really doesn't matter who is faster, now does it?
Active/Active maybe, but lose a node in an I/O group, and you are no longer HA - unlike VPLEX.
Non-disruptive migration across I/O groups, except according to your latest customer bulletins, only supported on two variants of Linux, and not on Windows, AIX, Solaris, HP/UX, nor VMware or Xen.
And 53GB/s miss is actually an engineering number - real, live, measured by your equivalent performance specialists in our labs... stacked up against similar claims by Hitachi and IBM (we actually tested MORE miss bandwidth on the VSP than Hitachi claims in their documentation).
I think you'll find SVC/V7000 has been true active/active since 2009. And with 6.4 we now have the ability to migrate volumes between IO groups without disruption, and the ability to present volumes through all IO groups, so true N-way access.
Meanwhile, 53+GB/s miss bandwidth - that's some serious number of SAS ports (or are you still stuck on FC-AL disks)? That's a marketing number, much like HDS's 296PB... what's the real miss (from-disk read IOPS) number, or will EMC continue to hide behind those amazing performance graphs that show no scale on the y-axis - gotta love how you get away with that :)
I really love this post. It is so typical EMC style - lots of words and very little sense. With Invista, you guys said switch-based virtualization was the way to go forward. I am really wondering what Invista customers must be going through right now. Then you guys said VPLEX was the way to go forward, and going by your past, I am sure it is already on its deathbed. Now with FTS, again you guys are saying that this is the way to go. But then the 40K just seems to be a box which can also do virtualization. You have to offer it for free because you are late in the game. Coming to your differentiators - let's talk about the performance numbers you have mentioned - 53GB/s and 2M IOPS. Just glance at all your competitors' specs and you will realise that they already have more than the cache miss bandwidth you have mentioned. As regards 2M IOPS, I guess we just have to believe you, as EMC does not do any public benchmarks. Then you again put out a meaningless figure of 45 microseconds from FTS/VPLEX to external storage. When will EMC start talking about numbers that really matter to a customer - for example, the overall response time/IOPS/throughput in customer-relevant environments and for specific configurations???
"Quite" - as I said, growing faster than SVC did in its early days. You guys where pretty boistrous back then about your growth rates...well, we're quietly exceeding them today.
Interop - VPLEX already supports 42+ different platforms. For FTS, just a matter of running everything we already know through the paces. Customer qualifications have added 17+ arrays since GA, so not to worry. Oddly, the current hottest field request is for us to qualify SVC behind FTS - seems that more than a few customers are looking for an easy way to migrate OFF of your product. You probably don't see those unhappy customers who have learned the realities of your not-so-active/active, definitively non-HA solution and realized that they need more.
And that both VMAX and VPLEX can deliver what SVC cannot.
Catch-up? In numbers deployed, maybe, but not in features or capabilities. With your 8 year lead, you've still not delivered end-to-end data integrity OR truly Active/Active. The lack of serious competition has bred complacency in both IBM SVC and Hitachi UVM.
Here's your wake-up call.
As to differentiators for FTS? Umm...where should I begin?
TimeFinder, SRDF, FAST VP with an 8MB granularity over 128,000 devices and 4PB+ usable capacity, VMware+HyperV+XEN integration, RecoverPoint, Federated Live Migration, 53+ GB/s cache miss bandwidth, 2M+ IOPS, less than 45 microsecond I/O overhead for both FTS and VPLEX to external storage?
Oh, and did I mention that VPLEX is the *ONLY* solution certified by Oracle to enable stretch Oracle RAC clusters to over 100KM, with full HA and witness coordination?
Consider the ante raised by an order of magnitude (or more).
"quite" is the keyword. Not seeing it come up as a real competitor in the field. Lots of "the roadmap looks good" style hype, but when bit comes to byte... Active active HA at distance, is it really that active? watch this space...
Interop. Good luck with that. Learning, and real customer use, are very different things, as I'm sure you'd agree.
Anyway, maybe we can concentrate on differentiators, now that you've realised/agreed we had the right architecture in the first place, and for once you are playing catch-up ;) Look forward to sparring in the future; it's been quiet out here in your absence.
Thanks, Barry - I think ;-)
Seriously, VPLEX is doing quite well, thanks for asking - growing at a rate faster than SVC did in its first years. Winning lots of deals, in fact, where SVC doesn't work - like providing Active/Active HA over distance. Seems the SVC split cluster is no longer HA when either of the nodes fails, while VPLEX Metro remains fully HA at the surviving site. It's just one of those Achilles' heel deficiencies of SVC that gets people looking for a better solution.
And it's not like we have to reinvent Interop testing - we're just redeploying what we've learned over the past 10 years into two platforms instead of one.
Congrats on finally seeing the light, but I guess the U-turn is due to the fact that both Invista and VPLEX haven't been anywhere near as successful as hoped. Bundle it with the big iron and maybe someone will use it - a bit like HDS...
Anyway, good luck with that interop catch-up - being first (and industry leading) does have one advantage: 10+ years of interop testing you need to catch up on - unless you plan to use SVC as the gateway to your support, as I've seen in the very rare cases where I've come across VPLEX.
Ken, good read on that paper. However, I've always seen storage virtualization as useful for a few things: archival (where generally I've got some type of verification or checksums built into whatever's throwing the data there), or the big reason being data migrations (migrating 2PB of data into a frame without FTS would be less than exciting). I'm not sure of the utility of pushing Tier 1, or long-term primary storage, through storage virtualization (and never was on any of the other platforms I've used it on), as the hardware support costs start to get ridiculous with most vendors. The market will decide this, and 24 months from now it will be interesting to see how many people are really stretching 4-6 year old frames along behind anyone's engines (maybe HDS/IBM/NetApp could talk about their experiences with people keeping old, power-hungry, slower clunkers around). Adding all the whiz-bang features (VAAI/replication) is nice, but until someone has a means to make NetApp and HP renewals on 5-year-old arrays not ridiculous, I'm curious how far virtualization will go beyond the traditional uses of migrate, then keep the old system around as a VTL/backup target/Tier 2 till budget frees up.
Doesn't NetApp's V-Series do checksums and verification on 3rd party arrays? I thought DataCore did this also with their NMV volumes on external tiers.
As Barry implies, it doesn't matter who was "first", it only matters who is "best" in the present tense at any given point in time.

If anyone doubts that the silent data corruption issue is both real and critically important (and if you really want to scare yourself!!!), please use your favorite search engine and look for "CERN" plus "silent data corruption" and you will find both what I consider to be the definitive independent and credible white paper on the subject, as well as independent analyst and press commentary on what CERN found in their actual physical tests and analysis.

The reason that this matters is that while EMC storage products (like Symmetrix VMAX and VNX) have always had features that provide deep and persistent data integrity protection, amazingly today there are some MAJOR storage products from some MAJOR storage vendors that either don't provide persistent data integrity protection at all, or don't provide full end-to-end integrity protection (which is logically the same as NOT providing integrity checking).

Simple positioning - EMC Symmetrix VMAX Federated Tiered Storage provides this level of persistent integrity protection even for those (non-EMC) storage products that don't do it themselves, and the "other" storage-based virtualization technology does not. So the question is - do you want confidence that the data that you write is always going to be the data that you retrieve, or are you OK with the very real and probable risk (read the CERN paper!) of data corruption?
It's stuff like this that I point to when people ask why I call the entire Tier 1 storage industry a multi-billion dollar barking carnival.
When I heard EMC was going to do this I swung by the blog hoping you were coming out swinging as always (if you had settled for eating crow pie it wouldn't have been the same old crazy Barry I expected). While I honestly don't do a huge amount of storage virtualization, I find it confusing that checksumming is viewed as the magic bullet that EMC brings to the table here. Is there a reason why you wouldn't just trust the virtualized array's block-level verification? If you virtualize a VMAX or NetApp behind a VSP, why wouldn't you just offload/have the virtualized array do its own checksumming? Is there some giant performance advantage from doing that centrally rather than on the 3rd party array? If you're virtualizing arrays where the SPs are overloaded I guess this could help, but I'm not really seeing the smoking gun here, especially when you're arguing that it is superior to have this rather than a full HCL. - Eagerly awaiting more rants, marketing, and hilarious back-and-forths with HU.
Tach - indeed I remember, and I am glad you find my post so hilarious. Coincidentally, I met with DB yesterday to explore FTS, and they didn't seem to care that we had changed our strategy 180 degrees...
But you know, what is pretty funny to ME is that the Cult of Hitachi's responses to FTS are so childishly focused on EMC's historical anti-array-based virtualization position. Yet the Cult offers no position on the abject lack of data integrity validation in UVM.
Oh, and for the record, Dave D. went to HP shortly after that meeting with DB. Our strategy changed to Federated Storage using both VPLEX and VMAX later in 2009, after he left.
Well said, Scott!
Barry, do you remember the EBC for Deutsche Bank in 2009, where both Dave Donatelli and yourself were bashing HDS virtualization strategy and even depicted a USP-V with external arrays virtualized with a big red circle and line through it?
I read this article and must say that I find it amusing that EMC has changed its approach to storage virtualization, going from SAN-based technologies (Invista / VPLEX) to a controller-based model such as that used for the last 5 years by HDS.
I was at one of your EBC presentations in 2010 and you yourself said that controller based virtualization was not the way EMC was going to use this technology. Your direction was VPLEX.
I too was excited about VPLEX, but after my experiences with it in several customer sites found it to lack the scaling needed to virtualize an entire DC. It was more in line with SVC and creates islands of virtualized storage.
Today the VSP supports EMC, XIV, and other storage vendors and HDS RECOMMENDS using HDT with the virtualized storage. It can support up to 255 PB of virtualized storage.
I have worked for both EMC and HDS and I applaud EMC for following HDS and recognizing that they had it right from day one.
As you can imagine from the tone of my comments I spend most of my consulting hours with HDS these days and focus the bulk of my efforts on virtualizing HDS and third party storage behind the VSP.
I just find it very humorous that you are trying to take shots at a company whose technology you are trying to copy.
You made the right move, you now just have to catch up to them, and they have a 5 year head start.
Storage Anarchist, this must be the funniest article I've read in a long time. Not only does EMC have great marketing, they also employ comedians like yourself. You might as well include something like: "FTS, the iPad killer"
Thanks Barry - the explanation about how FAST and VFCache work together is a great one, and the future looks even brighter. I am also seeing a lot of opportunities for our Partners when coupling VFCache with, maybe, VMAXe and their skills around SQL and Oracle!
Thanks for the comment Alexios, but I suspect that Hu would have mentioned the existence of larger USPV/VSP's in his response to this post.
Instead, he only said that Hitachi wasn't in a competition with EMC to build the largest arrays.
So while I too honestly don't KNOW whether there are larger USPV/VSPs, I do know that nobody from HDS has presented evidence thereof.
I just read this now, and went over to Hu's blog to read his. I have to correct a mistake here - Hu was showing the largest USPV/VSP externally attached storage capacity - he never made any claims about the largest internal storage arrays.
I honestly do not know whether the top 10 USPV/VSP internal storage arrays have as much storage as what you've listed.
Wonder when VSP will star in movies like "Repoman" or "Mad Max", due to the design appeal :)
Good to see that the HDS v EMC big frame debate still hasn't gone out of style.
I think your data is great but I have to agree with Soikki here - customers just don't trust fun, colorful graphs. They are a lot more savvy and they want to know details because their reputations and jobs depend on real data.
Slides coupled with hesitation to provide details can make for a relationship of distrust. Believe me, the baseless assertions that Hu Yoshida makes also sow the seeds of distrust (much like your configuration-detail-less graph).
You are usually a bit more transparent in your posts... C'mon, share your data! Step up and don't hide behind a cute little "YMMV". Take a risk and trust that the storage world will understand and decide for themselves. You've always been a leader; don't become another Hu :(
Exciting stuff here. I can definitely see this filling a huge need. Can't wait to get more details.
Hi Kevin, I think that's exactly the point here: with FAST VP there is potentially a radical positive improvement in both CAPEX and OPEX compared to not just conventional configurations but also alternative tiering products - this is NOT "your father's Symmetrix"...
VMAX is a whole other discussion, but more around the benefit vs. cost and complexity. I was a 12-year Symm guy, and as robust as the architecture was, unless you have an application that needs the uptime and can justify the cost, it doesn't make sense.
I'm not a VNX guy - these comparisons are to VMAX FAST VP and competitive offerings in the enterprise storage space.
However, my understanding is that while FAST VP has some overhead on VNX, using FAST Cache with and/or without FAST VP affords VNX customers significant performance benefits with minimal overhead.
Bias is an understatement. You mention nothing about cost, nor do you bring up the performance implications of using FAST with VNX. The added CPU cycles to track the metadata, no matter what the size, can be upwards of 30%. If you are running your CPUs at 20%, another 30% is no problem, but no customer can afford to run their systems at 20%. If you go beyond that you lose the ability to do non-disruptive upgrades, which again most customers can't tolerate - they have requirements of 24x7 with minimal to no downtime. Also, what are you really getting from tiering? Space savings? I know it's not performance, because EMC states FAST is not implemented for performance, although some environments where the storage has been designed incorrectly may benefit. Just my two cents.
Unfortunately, there are no standardized tests that are designed to demonstrate the operational impacts of automated tiering. Most standardized benchmarks run for a limited time to pre-warm the caches, then execute for a relatively short time, and then report elapsed time and IOPS/MBs averages across the execution period.
Due to the limitations of competitive auto tiering products (like Hitachi HDT and IBM EasyTier), a standardized test would have to be designed to run for days, and to have a dynamic working set that is representative of the real world and that is larger than available cache+flash, and the test has to be easily repeatable.
Neither SpecFS nor SPC tests fit these requirements.
How we create such a test is proprietary information, but the effective workload looks like this:
20% 8K read hit
45% 8K read miss
15% 8K random write
10% 64K seq read
10% 64K seq write
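For readers who want a concrete feel for that blend, here is a minimal sketch (Python, purely illustrative - it is emphatically not the proprietary test harness mentioned above, and every name in it is hypothetical) of how a driver could draw I/Os according to those weights:

    import random

    # Hypothetical sketch of the stated workload blend - an illustration only.
    WORKLOAD_MIX = [
        # (weight, block_size_bytes, operation)
        (0.20, 8 * 1024,  "read_hit"),      # 20% 8K read hit
        (0.45, 8 * 1024,  "read_miss"),     # 45% 8K read miss
        (0.15, 8 * 1024,  "random_write"),  # 15% 8K random write
        (0.10, 64 * 1024, "seq_read"),      # 10% 64K sequential read
        (0.10, 64 * 1024, "seq_write"),     # 10% 64K sequential write
    ]

    def next_io():
        """Pick the next I/O according to the weighted mix."""
        r = random.random()
        cumulative = 0.0
        for weight, block_size, op in WORKLOAD_MIX:
            cumulative += weight
            if r < cumulative:
                return op, block_size
        return WORKLOAD_MIX[-1][2], WORKLOAD_MIX[-1][1]

    if __name__ == "__main__":
        sample = [next_io() for _ in range(100_000)]
        for _, _, op in WORKLOAD_MIX:
            share = sum(1 for o, _ in sample if o == op) / len(sample)
            print(f"{op:13s} {share:.1%}")

The hard part, of course, is not the mix itself but sustaining a dynamic working set larger than available cache+flash for days at a time, as described above.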
I originally presented these results to challenge the baseless assertion made by Hu Yoshida that VSP could handle the added effort of auto-tiering without impacting performance, while other arrays cannot. Readers can accept my assertion that the tests were in fact fairly executed or not, but at least I have tried to back up my contrarian position with data - data that I feel is fair enough to stake my reputation upon.
As always, YMMV.
Thanks for commenting on the comments :)
However, I still must disagree with you about not sharing any configuration details. If a vendor shows a graph comparing their competitor but refuses to share the details of the comparison, it is meaningless.
Through the years I have seen similar FUD and graphs from EMC comparing other vendors, and constantly the thing missing is configuration info.
No detailed info = meaningless graphs.
I think that if your graphs here really were the truth, you would run the SPEC tests and shout about it big time (as with the Celerra). Accept the challenge? Or share with us the information I requested earlier. We, customers, are very eager to get accurate information and performance measures.
By the way, Soikki - it doesn't help VSP response times when writes to SATA drives must be verified by reading the data back into the array and comparing to the original data, as is the prerequisite to use SATA with HDT.
VMAX uses more efficient and effective error detection/prevention for all drive types (including SATA).
But even after auto-tiering optimization, where presumably most of the writes have been moved OFF of the SATA drives, the VSP exhibits the response time deficiency vs VMAX that Hitachi arrays have shown ever since the first Lightning.
You'll have to take my word for it that the test configuration was the same for both platforms - our performance engineering team is stricter than most customers in performing apples-to-apples comparisons.
And this difference in response time is no surprise, nor is it even new - Symmetrix has long held a significant response time advantage in OLTP-type workloads. Our sales teams use this to their advantage every day.
And indeed, the FAST VP response times did double - but they also remained far below the impact seen with HDT, and the impact lasts for a far shorter time.
And if the impact is too much for a VMAX customer, it can be dialed back by reducing the aggressiveness of the relocations...while HDT offers no such tunability.
When you say "identically configured arrays", would you spare us the details please? Disk types, amount, array cache etc? Is everything done correctly on host; lvm, load balancing...
I find it odd that VSP's response times are so much worse than V-Max's here. It casts a small shade of doubt on the graph. You are claiming here that with an identical config, VSP's response times are almost double those of V-Max.
If this would be the case, I'm quite sure that you would have trumpeted this much earlier and much louder.
BTW, you note here that "6. HDT response times more than double during page relocation". Did you note that the same happened for FAST as well?
HDT users can specify a monitoring window only when using their 24 hour cycle. For example, monitor from 9-5 and the movement begins at 5:00.
One of the reasons we have seen less than expected performance from the VSP with HDT is that we are forced to configure our SATA tier with write-verify mode, which causes every write to SATA to be followed by a read to confirm the correct data made it to the drive. I don't know why this is a requirement but it's not ideal for performance.
Good to hear from you again!!!
And your comments do help explain why IBM SimpleTier is so inefficient.
If you focus only on getting performance out of SSD, you wind up with a very expensive solution - especially on the DS8K where the minimum # of SSDs is 16!
Including lower $/GB SATA in the solution enables FAST VP to cost less while improving performance.
Similarly, supporting only 2 tiers limits the applicability of the implementation to applications whose working sets remain constant. Many (most?) enterprise applications' working sets will change quickly and often - daytime transactions and nightly batch runs, for example. During the batch job, the working set will likely exceed available SSD, but be far too slow if served off of SATA... hence the "middle tier" of 10K or 15K HDDs.
IBM SimpleTier's extent size is always larger than FAST VP (SVC, V7000 and DS8K), and thus is always moving more "cold" data with every bit of "hot" data it promotes...very wasteful of an expensive resource.
Finally, since SimpleTier defines the ratio of Flash+HDD at the pool level, all applications must compete for the same resources. With FAST VP, each application can have its own policy and priority, allowing customers to optimize the most important applications, even as they share pools for maximum utilization.
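As a purely hypothetical illustration of that last point (the names and percentages below are invented, and this is not actual Symmetrix syntax), per-application policies versus a single pool-wide ratio might be modeled like this:

    # Hypothetical illustration: per-application tier policies vs. one
    # pool-wide ratio. Names and percentages are invented for the example.
    pool_wide_ratio = {"flash": 5, "fc_15k": 45, "sata": 50}  # everyone competes equally

    per_app_policies = {
        "OLTP_prod": {"flash": 25, "fc_15k": 75, "sata": 0},   # latency critical
        "batch_dw":  {"flash": 5,  "fc_15k": 45, "sata": 50},  # throughput plus cheap capacity
        "dev_test":  {"flash": 0,  "fc_15k": 20, "sata": 80},  # lowest cost
    }

    def max_capacity_on_tier(policy, tier, app_capacity_gb):
        """Upper bound of an application's capacity allowed on a given tier."""
        return app_capacity_gb * policy[tier] / 100.0

    # With a single pool-wide ratio, OLTP_prod and dev_test are limited to the
    # same 5% of flash; with per-app policies, the important application gets
    # priority while still sharing the same physical pool.
    print(max_capacity_on_tier(per_app_policies["OLTP_prod"], "flash", 10_000))  # 2500.0 GB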
Easy Tier on SVC and V7000 is driven by the pool extent size, so is configurable between 16MB and 8GB.
WRT more than two tiers... the initial advantage of a function like EasyTier is its ability to make best use of SSD. That is, we have a very expensive ($/GB) medium that provides massive performance (IOPS/GB); therefore the best way to use SSD is to spread those IOPS over an entire enterprise.
HLM, i.e. archiving cold data - yes, that's useful for making use of SATA.
However, if we'd all thought that the 2-3x difference between 15K and 7.2K meant we needed tiering, wouldn't we all have implemented EasyTier, FAST or VP, etc. 10 years ago???
That's not to say that multi-tiering can't help, but EasyTier is aimed at gaining benefits from SSD.
I'd be very curious to have you do some in-depth comparison on Compellent's tiering (can handle shortstroking, etc.) vs. FAST on VMAX.
Realizing of course that Compellent and VMAX don't really compete in the same space...
Caching is not tiering, and there can be no doubt that the two are complementary - Symmetrix uses a very large DRAM cache as its foundation, and tiering yields meaningful added performance benefits to VMAX.
For another example, you can see the benefits of combining Flash-based cache with Flash-based tiering in this blog post about a CLARiiON customer's experiences.
Two important things to consider when comparing tiering and caching:
First, caching requires ADDITIONAL Flash capacity, while Tiering does not. With caching, there will always be two copies of data in the cache, while with tiering there is only the one (a quick capacity sketch follows the second point below).
Second, and of particular distress to NetApp users, caching writes to flash is not always supported, where tiering (by definition) supports both reads and writes to every tier. CLARiiON does cache writes to FASTCache, NetApp does not.
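Here is that quick back-of-envelope sketch of the first (capacity) point, using assumed numbers rather than any measured configuration:

    # Back-of-envelope sketch (assumed numbers) of the capacity point above:
    # with caching, hot data lives in flash *and* still occupies its HDD home;
    # with tiering, the hot extents live only on flash.
    working_set_gb = 10_000   # total provisioned capacity (assumption)
    hot_gb         = 1_000    # hot data kept on flash (assumption)

    caching_hdd   = working_set_gb            # HDD still holds everything
    caching_flash = hot_gb                    # plus a second copy in flash cache
    tiering_hdd   = working_set_gb - hot_gb   # cold extents only
    tiering_flash = hot_gb                    # single copy of hot extents

    print("caching total raw GB:", caching_hdd + caching_flash)  # 11,000
    print("tiering total raw GB:", tiering_hdd + tiering_flash)  # 10,000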
There has been some speculation that the flash devices NetApp uses are not well suited for mixed read/write workloads, and that they are particularly poor at handling large block sequential writes (WAFL turns random writes into large block sequential).
The flash devices that EMC uses in VMAX, CLARiiON and VNX are specifically selected to handle mixed read/write random and sequential workloads.
I notice that you don't mention NetApp's Virtual Storage Tiering, perhaps because they're not really tiering but instead caching?
What are your thoughts on this model? Is tiering still relevant if you have lots of flash cache?
Tomer - thank you for making my point.
Kaneda San - Welcome to my blog. I try to mix it up here - sometimes expressing strong support for my company's products, sometimes pointing out the weaknesses of competitors' products/marketing/FUD, and sometimes just trying to enlighten readers with perspectives on relevant topics so that they (and you) can make better informed decisions. I hope you like it (not everyone does).
I understand what you're saying, but your ending tagline reminds me so much of Fox News: "We Report, You Decide!"
So, in spec you have the create phase, which creates all the filesets; the warmup, which gives one the chance to cache some data (assuming that you can't cache all the data in the create phase); and only then the actual run. So, the chance that one would be able to cache the data is really small (and this is usually the case in spec, based on my experience).
So, data caching is rare. That's what I meant about spec trying to avoid DATA cache hits.
When it comes to metadata (which is most of the ops in spec) it's a different story. And since, AFAIR, only 30% or so are actual data ops (read/write), this is the part that will cause higher latency.
So, it might be the case that the read/write ops come in at 20ms latency, but the rest of the 70% of the workload gives much better response times.
Unfortunately, the spec disclosure doesn't show latency per op type (you do have it in the spec run log).
Another piece of info that the spec disclosure doesn't reveal is pricing information...
So, how much does an SSD-based system cost? I guess a lot. But it will surely provide much better response times on most ops.
Anyone who has run SPECsfs2008 knows that the actual working set of the benchmark is a fraction of the total fileset, and that indeed the benchmark is VERY cache friendly - if your cache is large enough, that is.
There is no other way that you could ever attain response times less than the 6ms nominal response time of a 15K rpm hard drive if cache were not playing a SIGNIFICANT role in buffering both reads AND writes...
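A simple back-of-envelope calculation (service times are assumed for illustration only) shows just how large a role cache must be playing to produce sub-6ms averages:

    # Quick back-of-envelope: what cache hit rate is implied by a given average
    # response time? Service times are assumptions for illustration only.
    disk_ms  = 6.0    # nominal 15K rpm drive service time (from the post)
    cache_ms = 0.3    # assumed cache/DRAM-class service time

    def implied_hit_rate(avg_ms):
        # avg = h*cache + (1-h)*disk  =>  h = (disk - avg) / (disk - cache)
        return (disk_ms - avg_ms) / (disk_ms - cache_ms)

    for avg in (4.0, 2.0, 1.5, 0.4):
        print(f"avg {avg:.1f} ms -> implied hit rate ~{implied_hit_rate(avg):.0%}")

Under these assumptions, even a 1.5ms average implies roughly 80% of operations being satisfied from cache, and a 0.4ms average implies nearly all of them.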
Luckily, the spec report also shows the "fileset size" - which means how much data spec used.
In the IBM case, it's 49805.3 GB.
Last time I checked, that's a bit bigger than 1.4TB of cache.
So... the data can't be fully cached (that's the main purpose of SPECsfs - don't be cache friendly!!!)
Would it be fair to assume that at the lowest I/O rate all I/Os are served from cache? If this is the case, IBM's response time is 1.5ms vs. VNX's 0.4ms. So perhaps the correct answer is that - comparing apples to apples - VNX is almost 4x faster.
Coming from the world of clustered file systems, SPECsfs seems to be the standard benchmark that is easily available to access, and is the most widely abused benchmark of all time. Now before the spec guys get upset with me, the problem isn't with the SPECsfs tool-set, the problem is with the vendors who use it (I am guilty of this vendor deception personally).
Umm...and just who do you think is going to believe that 1997 SPECsfs97_R1.v3 results are in any way comparable to 2008 SPECsfs2008_nfs.v3 results?
Come on, NTAP - you're going to have to do better than that!
Which is fastest? This one:
Come on, guys. Don't bother submitting something that is twice as slow. When you reach 927,000 at 2.7ms, give us a call!
All good points.
However... my biggest beef with spec sfs - since the days it was called LADDIS! - is the absence of cost metrics.
I really would like to see a metric of price per IOPS. I understand why it is politically hard to do, but TPC somehow managed to make it happen.
Thanks for the feedback - I am sorry that you did not find my blog interesting or informative.
Why are your blogs basically EMC sales FUD? There is nothing unbiased, therefore nothing worth reading. You do not appreciate that other vendors are capable of original thought, or that EMC has to build on 10 years of legacy code for each new function.
I never read your blogs because of this but I followed a link from another site. Blogs should be interesting not sales.
Just for clarification... CLARiiON and VNX both reclaim zero blocks in 8KB increments out of the 1GB pages that we allocate out of the VP pools. This reclaimed space can then be utilized by any other LUNs using the same pool, and will be filled before new 1GB pages are claimed.
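Conceptually (and only conceptually - the sketch below is an illustration of the idea, not the actual CLARiiON/VNX code), the reclamation works like this:

    # Minimal sketch of zero-block reclamation within 1GB virtual-provisioning
    # pages, at 8KB granularity. An illustration of the idea only.
    PAGE_BYTES  = 1 * 1024**3      # 1GB allocation unit claimed from the pool
    BLOCK_BYTES = 8 * 1024         # 8KB reclamation granularity
    BLOCKS_PER_PAGE = PAGE_BYTES // BLOCK_BYTES

    ZERO_BLOCK = bytes(BLOCK_BYTES)

    def reclaim_zero_blocks(page: bytearray, free_list: list) -> int:
        """Scan one allocated 1GB page and return zeroed 8KB blocks to the pool."""
        reclaimed = 0
        for i in range(BLOCKS_PER_PAGE):
            start = i * BLOCK_BYTES
            if page[start:start + BLOCK_BYTES] == ZERO_BLOCK:
                free_list.append(i)   # this 8KB block can now back another LUN
                reclaimed += 1
        return reclaimed

New writes from any LUN in the pool consume that free list before a new 1GB page is claimed, which is the behavior described above.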
Thanks for the shout-out! I find it amazingly hard to locate VAAI support information, and just wanted to consolidate it somehow.
I also am amazed at how each VAAI implementation is different based on array capabilities. The VMAX and 3PAR arrays seem much better-suited to block zeroing than others, including the HDS line and EMC's own Clariion. I wonder how the VSP interacts with bulk copy and hardware-assisted locking. Gotta ask Hu...
Development is working on simplifying this, but yes, in the meantime, your approach will work.
If the RPQ does not pan out, which it should, then you can still use the traditional Open Replicator with Hot Pull and Donor Update option. The only downtime required for this type of migration is the time it takes to shutdown a host, switch the masking, then boot it up again.
Very cool stuff. I am working on a design for a FAST VP deployment with a customer and had a question about the granularity of control. You state that you can assign a policy to a device, but the interface only allows for assignment to a Storage Group. Since most of the Masking Views in use for my customers contain devices that they would not want under FAST VP control is it preferable then to create a Storage Group that is just used for FAST VP Policy assignment so that we can have fine control over which devices are assigned to which policies?
Many of your concerns are a simple matter that the qualifications have not completed for the combinations you list. I suggest you submit an RPQ before abandoning hope.
As my organisation is just about to go through a DMX4-VMAX migration I was quite excited to read about FLM.
I had a look at the FLM documentation on PowerLink and it appears there is a bit of a gap between the statement "FLM is a peer-to-peer solution that literally orchestrates the LIVE transfer of LUNs from old arrays to a new one requiring nothing more than a qualified multipath driver installed on the host " and the reality of the Simple Support Matrix (300-000-166 REV A02).
Cluster volumes are not supported (SCSI3 reservation issues).
Boot volumes are not supported.
Windows 2003R2 only supported with a single specific Emulex or QLogic HBA.
Windows 2008 or Windows 2008R2 are not supported.
As this takes out most of my environment, unfortunately it's not quite the panacea I initially thought it might be. Disappointing for this customer, considering it's being called out as such a major new feature.
Disclaimer (I'm ex-EMC) and agree with most of what Barry has written. However, I do think customers find the following of interest with the VSP:
1. Density found via the use of 2.5" SAS drives.
2. The use of standard 19" racks.
3. The ability to buy a diskless VSP.
Looking forward to NYC on the 17th / 18th and would be happy to chat about our Symmetrix / VPLEX business and trends we are seeing in the Tier 1 marketplace.
I was going to ask you if the Beach House was on the market after I saw the listing on Facebook.
And congrats Grandpa. :-D
Very glad to hear you are not retiring! Congratulations on the very happy news about your expanding family! Given your love of photography I am sure they will have a very well documented childhood!
Happy Holidays and see you in the New Year!
@Josh et al:
I have so many problems with the spin and propaganda that Barry churns out (heck, it's EMC, what would you expect). I especially have problems with the outright misleading conjecture around XIV. One day I'll post something on this and really get into it with him (yes, I represent the XIV product).
BUT I am posting on this thread to give him some props. He didn't censor my nasty posts previously, when he easily could have -
Please visit this older post to see: http://thestorageanarchist.typepad.com/weblog/2010/06/3004-tell-them-why-an-idea-worth-spreading.html
@JoshKrischer: Cereva went belly up in mid 2002, EMC picking up the IP in July/August. I followed this closely as I was working for a competitor (Rhapsody Networks). DMX was introduced the following February.
If you think that they took the IP and folded into a product in 6 months or less, I'm dumbfounded.
I am listening to everyone but as an analyst I have to build my opinion based on available and filtered information.
I can quote a much respected analyst who wrote (unsolicited) on LinkedIn:
Vice President and Distinguished Analyst, Gartner Inc.
"Josh Krischer demonstrates a knowledge of the server and storage markets that is nothing less than encyclopedic. He is blunt, honest, uncompromising and often irascible - but very rarely wrong!" February 27, 2006
I will advise you to read my profile and some other testimonials on LinkedIn.
My comment was not meant to offend you - my apologies. Your insistence, however, is sometimes irritating; everyone is free to have their opinions, but to question the opinions of others is not always correct. I'm sure you would have much to teach, but I think a good teacher, first of all, should be able to listen to others.
All the best.
If you say so...
Just for the record:
OttoroLuk is Stefano Panigada
Advisory Technical Consultant at EMC, Milan Area, Italy
Josh, I rarely censor comments on my blog, and never when they are voicing a person's opinion.
Just as you are entitled to voice your opinions (and thus I did not censor them), so too is anyone else entitled to voice theirs.
Brave OttoroluK, who are you? How dare you write this without identifying yourself. What "bravery" and deep insights. I will be happy to challenge you on everything.
Barry, you should be ashamed to publish this comment. We were never friends but I always had some appreciation for you. It's gone.
The real question here is: Is there still someone out there listening to this HDS-Fan Moron?...
It is amazing that every time I point to inaccuracies in their blogs, EMC bloggers, instead of delivering sound answers, attack my credibility. It seems that EMC culture is not mature enough to keep discussions on technical facts without going personal.
1. The patent which you mentioned is for the Bus structured Symmetrix not for DMX.
2. The meeting with M. Yanai was an official meeting on EMC roadmap organized by Steve Bardige, the head of EMC Analyst relations who also took part in the meeting.
3. Your answer is marketing stuff without any "beef"
1. What you both say is that EMC developed the DMX without the knowledge of the head of engineering, or that EMC developed the DMX in less than one year's time. Who, with some technical understanding, do you think will believe that?
2. In 2002 I pointed out that the Symmetrix was an aged design and its bandwidth was too small to support large capacity. EMC's reaction was sending a lawyer letter to Gartner management. Let me quote from the Storage Anarchist blog, July 03, 2008, 1.015: stranger danger: "From the outset, Symmetrix DMX has contradicted many of the design tenets that Moshe seems to embrace even still today. Some will even say that the Symm 5 (the 8000 series) embodiment of those design tenets was no longer competitive in 2001 or 2002, and that Symmetrix fell behind the competition and nearly cost EMC its market-leading position". Same conclusion, but 6 years later.
3. EMC complained xx times to Gartner about my writing but was not able even once to prove that it was wrong. I invite you to read my Research Notes looking for wrong information. The only two times I was wrong were because I was deliberately misled by EMC: GDPS support by IT Austria, and SRDF/A performance.
4. Never in my life have I refused to speak with any reference customer! I cannot recall any call that we had in the past. Please refresh my memory.
5. I am not biased against EMC; in fact one of my customers is deploying VMware in his main and DR data centers following my recommendation. I can give the name if you want, large storage RFP in Israel, etc.
6. Using the same logic, it is unethical for any vendor-employed blogger to write about his competition, and in particular to write inaccuracies and FUD.
7. "I see you taking a consistent pattern of stating strong positions (always against EMC) and refusing to hear or explore contradictory facts." Sorry, are you blind?
10 days ago, during a lunch at an IBM event, some analysts from Gartner and Forrester discussed this blog. The general opinion was that it is a waste of time to read it. Think about this.
I will take their advice and stop wasting my time; you are not worth it.
Josh, I've been in and out of the thick of all of this for the last 16 years, and as Barry states, your assertions really are unfounded. I also certainly can't speak for whatever Moshe Yanai might have said to you, but I do know what we were building, and when, and the facts contradict your statements. Your agenda for presenting fantasy and ignoring hard facts appears to be pretty obvious, and I've personally seen you do it before in a conversation that I had with you as part of a larger group meeting back in 2002 that you were dialed in to when you refused to explore a suggestion that I had made to have discussions with some very specific customers whose hard experiences contradicted a position that you had taken in one of your notes. I see you taking a consistent pattern of stating strong positions (always against EMC) and refusing to hear or explore contradictory facts.
I know personally the engineers who wrote this patent and who architected DMX, and this is indeed the core patent behind it.
As to what Moshe told you, you'll have to ask him.
Your bias against EMC and your imagined reality serve only to undermine your credibility and is an insult to the good men and women who invented Symmetrix and DMX.
And with that, I'm done - I'm not going to play this game with you any more. If you want the last word, fire away.
This patent was initially issued in 1988 (and updated several times, the last update by Ofer Erez in 2000, close to the end of life of the original Sym) for the original bus-based Symmetrix - see the title: "System for interfacing a data storage system to a host utilizing a plurality of busses for carrying end-user data and a separate bus for carrying interface state data" - and not for the matrix-based DMX.
In 2001 the road map of EMC (according to Moshe) was adding more busses and increasing the bandwidth of the busses - or do you want to say that an EMC Senior VP cheated a Gartner analyst by providing wrong information?
Did EMC have an underground development division, hiding information from the head of Engineering? It is hard to believe!
Thank you for taking the time to comment.
Your assertions of market share are not supported by the facts of Storage Tracker, and I notice that recently Gartner has moved Hitachi into the "Other" category when comparing external storage suppliers.
But it's always fun to see how competitors spin the facts...don't you think so?
I had started a long response when someone stepped into my office and made me lose all the work I had done. Well, maybe I will be a little less angry in this one now.
What a provocation and misinformation; stuff the old USSR politburo would be proud of.
HP using a different name, oh my oh my... never noticed that it was such since... 1998 when they dumped EMC and resold Hitachi? Or is it EMC that dumped them? I can still find the papers of that time to check.
HDS lost the Sun channel; true, and it was in the air for quite some time; as a matter of fact, the vultures were around them to get the business w/o realizing that they were actually going out of business. HDS was ready, and we covered the whole revenue we used to get from Sun and more in our fiscal 2009. BTW, while we are talking about 2009, we were actually doing well with our poor USP V line, growing revenue beyond expectations. True, lower expectations because, in case you have not noticed, there was a big recession going on. DMX business declined much more than USP V business, and you don't need the IDC tracker to know that.
Talking about DMX... the USP was refreshed 3 years after its announcement; three years to this date... how long has it been since the D...
Technology refresh: the VSP is not just the same old with a new processor - that's EMC's approach. We took a proven technology and added a new component that expedites performance and functionality and enables new capabilities. In the meantime, we were redesigning it to make it consume a lot less, making it smaller to fit in a standard rack, adding SAS/SFF while you are still on the old ones...
Yes, we didn't come out with SSD till 8 months after EMC's first announcement, and we proved that the market was not so hot for them. W/o intelligence to manage them and lower their relative cost, it was not something people would jump to have. I think it is well known what happened to EMC at the beginning of the year... oversold the idea and overbought the product. Now the VSP has that capability and we start to see interest... by mixing them with Dynamic Tiering. In the meantime, FAST2 has come out and it is improved... you gave it a new name; typical... a new coat of paint for something that is at least a year behind schedule (spin it as you like).
I'll stop here now, I've real work to do....
As you can see, it was filed on Dec 30, 1998 and granted on May 14, 2002.
Contrary to the misinformation about DMX that you have been delivering for so long, any similarity to any other product is purely coincidental.
And you'll have to ask Moshe why he didn't tell you about DMX...I guess it is possible that he didn't know about it before he left EMC.
The fact Moshe didn't show you DMX on the roadmap doesn't mean it did not exist at the time. Moshe was often selective with what he shared with whom.
Your own timeline proves that DMX was not based on Cereva design. In June 2002, the hardware implementation was complete and FPGAs cast in hardcopy already. There is no way that IP acquired in June 2002 could have been incorporated into DMX that late in the development cycle.
I cannot comment on the reasons for any of EMC's acquisitions, but as the marketing manager responsible for launching DMX, I can assure you that your version of history is built upon coincidence, circumstance and your imagination - but not the facts.
VPLEX support for VMware HA covers a broad range of use cases and disruptions, more than any other multi-site offering in fact. That said, a complete HA (or FT) solution requires additional integration between VPLEX Metro and VMware, work we can discuss with customers under NDA.
If you ever decide not to be 100% anti-EMC any longer, let me know. I'd be happy to welcome you back and share with you more details about our current and future products.
1) Moshe Yanai briefed me on the Symmetrix road map and the matrix-based DMX was not there. It is hard to believe that EMC needed less than two years to develop a completely new subsystem (in particular after the significant developer exodus in 2001 and 2002), moving from bus-based to switch-based. Porting the Symmetrix software onto new hardware is more logical.
2) In June 2002 EMC acquired storage startup Cereva Networks, Inc. and in February 2003 EMC announced the DMX, what a coincidence.
3) Not using the Cereva design? Not letting developers mix? Then why did EMC acquire Cereva? EMC could have hired the 40 developers at minimal cost. I always considered EMC a ruthless business machine with hawk eyes and speed for acquisitions, but never a Samaritan organization that invests 10 million in a failing company without any use.
4) In 2003 when an ex-Cereva developer accidentally saw my Gartner slides on DMX, his reaction was "it is our Cereva!"
5) I don't spread FUD. A customer asked me if I had heard about other users of VPLEX with firmware 4.1 not supporting recovery for all cases of errors for VMware HA Cluster. Please remember that "People Who Live In Glass Houses Should Not Throw Stones" - nobody in the industry is spreading more FUD than your blog.
Although I personally have explained the facts behind the DMX architecture to you on several occasions, you insist on creating your own version of history.
Fact: the engineers who designed the DMX architecture did not have access to the Cereva IP. They couldn't have - Cereva's assets were acquired by EMC *after* the DMX was in implementation. The VMAX architects likewise did not have access (nor did they need access) to the Cereva IP...both innovations were entirely developed in-house, contrary to what you think or say.
You can continue to spin your tall tales, but you do yourself a disservice by doing so.
As to auto-tiering, indeed, the size of the extents has an impact, and the optimal solutions combine both innovations in hardware and software. But no matter how you spin it, the more data you have to move as a unit increases both the time and overhead required to promote & demote across the tiers. Bigger is thus not always better, nor is delaying optimization for 24 hours (or longer).
I have no idea what your comment on HHAM is supposed to mean: HHAM was announced last year as generally available, yet there have been no evidence of it actually shipping to customers for EITHER the HA clustering of USP-V's or the non-disruptive migration use cases.
Finally, I will not directly respond to your unverifiable attempt to create FUD around VPLEX. Like I said to your old pals at IBM, such competitive tactics are not unexpected given the accelerating successes of both VMAX and VPLEX.
Barry, my question was rhetorical, pointing to your "aging" remarks. I understand the 3 different hardware designs pretty well. I spent enough time with Moshe Yanai to understand the bus-based Sym 1-Sym 5, the DMX (which is based on the Cereva Networks Inc. design), and the loosely coupled clustered design of the VMAX. I will present on storage control unit structures at SNW EMEA.
The size of the "chunklets" in a sub-LUN design has a very strong influence on internal data movement overhead and subsystem performance.
There is a big difference between the ability to deliver a function and the ability to sell it; therefore your remark on HHAM is irrelevant.
BTW, are you sure that VPLEX is "sweeping the floor around the globe"? The VPLEX installation at one of my customers' sites was delayed due to microprogram errors.
It is somewhat amazing that you continue to attack Symmetrix even though you've not taken the time to truly understand how its architecture has evolved over the past 21 years. As its continued leadership in market share demonstrates, good designs are timeless.
But you are right on one point: the differences between the hardware of the DMX-3 and DMX-4 were primarily the change to the switched loop back end and the support for 3.5" SATA drives (more than a year before either IBM or Hitachi supported SATA in their so-called enterprise arrays).
What was MORE IMPORTANT about the move from DMX3 to DMX4, however, was the fact that the SAME MICROCODE SUPPORTED BOTH. Thus, all the new features introduced with DMX4 - thin provisioning, VLUN, RAID 6, QoS, etc. - all these features were also supported on DMX3 systems. Contrast that to IBM who merely upgrade the processor and change the back-end to create the DS8800, yet they REMOVE functionality that is available on the DS8700.
As to sub-LUN FAST on VMAX - it's coming. Unlike both IBM and Hitachi who have rushed to market their wanna-be competing alternatives, we're taking the time to do it right. As you'll soon see, moving sub-LUN extents of data at 1GB (IBM) or 42MB (Hitachi) once a day is a very, very inefficient granularity for automated tiering. A robust solution must be simple to set up and manage, dynamic and adaptable to change. Supporting more than 2 tiers is also a prerequisite to optimal price/performance efficiency.
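To see why granularity matters, consider a rough worked example (the hot-region size is an assumption; the extent sizes are the figures cited here and, for FAST VP, in a later comment on this blog):

    # Rough illustration of why extent ("chunk") size matters for auto-tiering:
    # promoting one hot region drags along everything else in the same extent.
    hot_region_mb = 1          # assume ~1MB of genuinely hot data in an extent

    extent_sizes_mb = {
        "IBM Easy Tier (1GB extent)":                 1024,
        "Hitachi HDT (42MB page)":                      42,
        "VMAX FAST VP (8MB, per a later comment here)":  8,
    }

    for name, extent_mb in extent_sizes_mb.items():
        cold_dragged = extent_mb - hot_region_mb
        print(f"{name}: moves {extent_mb}MB to promote {hot_region_mb}MB of hot data "
              f"({cold_dragged}MB of cold data along for the ride)")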
The deficiencies in implementation are understandable, however, since neither company is apparently willing or able to make the architectural changes necessary to their systems to keep up with the dynamics of the virtual data center.
Oh, and speaking of "where's that feature" - whatever happened with HHAM? More than a year after announcement, and still absolutely zero customer sightings. That little delay is allowing VPLEX to sweep the floor around the globe, owing to the fact that it is the only active/active clustering solution available!
Glass Houses, indeed!
Thanks for the comments, and welcome to the blog-o-sphere.
First, I couldn't agree with you more: an array built upon SVC's two-node cluster architecture is clearly not to be compared with enterprise-class, multi-controller arrays.
Your assertion of VPLEX's scalability is curious, as a single VPLEX cluster can scale from 2 to 8 nodes (unlike SVC which can only add 2-node clusters in a loose federation). With VPLEX, each additional node adds to the global memory, increases front-end and back-end connectivity, and scales performance linearly. Every VPLEX node can service I/O requests from any host, and can deliver I/O to any connected storage array - all with total cache coherency.
What VPLEX can do that NOTHING else can do is to tightly Federate two separate VPLEX clusters, each of which can have from 2-8 nodes. This "federation" is called "AccessAnywhere" and it allows the two clusters to "stretch" one or more LUNs across the two clusters. These LUNs are presented to hosts connected to either (or both) clusters fully active/active and with total cache coherency. Reads at either cluster will always access the latest written data, no matter where written.
The closest SVC (or NetApp) can come to this VPLEX functionality is to split their two-node cluster into two physically separate halves. In this configuration, neither of the halves is HA, and neither have any protection against failures other than their (distant) other half. With VPLEX, both clusters remain fully HA and can tolerate virtually any failure without interruption of I/O or loss of protected status (again, since every node in a VPLEX cluster can service any I/O).
I'll be talking about more differences in future posts - this was just a quick one to put a stick in the ground around which we can have some interesting conversations.
Finally, indeed EMC has many advantages and strengths. That's why we have market share leadership, and why we have a target tattooed on our backs (as the next commenter demonstrates).
What a collection of FUD and inaccuracies. It would take me too much time to refute most of your statements or assumptions.
Talking about aged subsystems, how many years did EMC "stretch" the original Symmetrix or the DMX? What are the differences between DMX-3 and DMX-4?
I told you once already that "People Who Live In Glass Houses Should Not Throw Stones".
BTW, is FAST II supported on state-of-the-art VMAX?
I can agree with many of your comments until getting down to the V7000/SVC comparison.
I don't really think it's fair to compare the SVC to the VMAX; they're in a different class of competition. Now compare it to the VPLEX, where it has a hard time scaling beyond 2 nodes in geographically dispersed areas, or its lack of write cache. (And Invista is dead now, right?)
The V7000, while incorporating SVC-like features, may take it farther. Time will tell. At least auto-tiering is included free of charge.
Not that I don't think EMC has some real answers and leadership. Tiering beyond 2 tiers is nice, thanks EMC.
I think the earlier opinions, prior to SVC/V7000, may have hurt and might have been better placed, but I'd rather see you hype up your strengths. You do have some nice ones.
Glad to see in your response that you admit the possibility you got SOMETHING wrong.
Well, there's a shocker - you come out of hiding to post about HDS and IBM again - nothing about EMC... apart from the obviously aged and maybe-it's-dead CX platform? Got problems there? Where are the SAS drives, where are the 2.5" SFF drives? 4Gbit FC-AL - how quaint!!!!
Ooops, hang on a minute, looks like I switched surname for a second!
No issues with backend drives here, thanks - we just reduced the tested load at initial GA to ensure the day-1 config is 100% stable.
And I think most people will find that the 240-drive system will happily out-perform a CX4, even a 960... how many actually have 960 drives, and how many of those drives can you actually saturate - even half of them??
There's nothing to correct. The DS8K market share losses to VMAX are even more significant today than they were when I wrote this. And in fact, the DS8700 is effectively end-of-lifed by the introduction of its replacement.
Maybe I got the WHY wrong, but the WHAT looks pretty much as I projected. Just because wiser heads prevailed and didn't try to position yet ANOTHER abomination into the "enterprise class" doesn't mean I was wrong. IBM did in fact announce a not-quite-yet-ready new SVC-based mid-range storage array (full features not available until March 31, 2011). And for that matter, they announced the DS8700-killer perhaps even more prematurely, judging by the list of things that didn't make today's announcement:
The following functions are currently not available on DS8800:
• Quick initialization and thin provisioning support
• Remote Pair FlashCopy support
• Easy Tier support
• Multiple Global Mirror session support
• z/HPF extended distance capability support
• z/OS distributed data backup support
• IBM Disk full page protection support
• 16 TB LUN size
And before you swing the FUD-axe, I'll note that TonyP's promise to deliver 8Gb FC for the DS8700 (more than 2 years later than 8Gb was available on VMAX) is hardly investing in the future of this oh-so-obviously dead platform.
But what stood out the most about today's announcements?
Not a mention of XIV, other than to note that the SVC folks lifted the look-and-feel of its GUI for 6.1. Sounds like Moshe left town for the same reason as Randy Moss: he didn't feel appreciated!
Given that EMC often place pressure on bloggers to correct what EMC views as inaccurate or misleading blogs, in the light of IBM's Oct 7 announcements, do you feel any compunction to correct anything you said in this blog post?
I guess that putting some servers together and adding a back end plus some kind of OS and so-called application code on top is not the right way to go. It makes the machine very hard to recover when there is a problem.
Support? Nowadays not many customers run the same box for more than 5 to 7 years anyway.
Everyone knows what they are actually doing. But my explanation is more fun!
If that's your honest opinion of what IBM is planning in the mid-to-high-end space, then your competitive research team either isn't doing their job, or you're not listening to them. I bet the IBM guys are all biting their tongues... HARD.
You're also falling into the "point one finger, find three more pointing back" trap. Why be worried whether the array you purchase gets updated or replaced with a new model soon? If it meets the projected performance and functionality requirements at the right price, then buy it and be happy. Even if IBM were to EOA the entire DS range tomorrow, I would expect IBM to give every customer excellent support for the lifetime of that equipment. To suggest otherwise makes me wonder about EMC customers' experiences with its refresh program. Did everyone who purchased a DMX4 in early 2009 make a bad decision?
Yikes, this blog entry is like FUD on Viagra; have you considered a career in politics?
You guys at IBM crack me up... "most successful in the market" indeed. IDC StorageTracker reports Symmetrix revenues more than twice those of the DS8000 family, so I'm not sure what you are comparing yourselves to.
Maybe you meant that it's just the most successful storage product made by IBM...that's no big deal when IBM-built storage products are 3rd at best in every market.
Always good entertainment reading those anarchical lines and ideas. Why should we withdraw a product that we brought out just last October and which is most successful in the market? Happy to hear that you like our Easy Tier in particular, as you rightfully mention it so often.
-- prk (IBM ATS)
Thank you very much for your comments.
While I agree that the early innovators indeed lead the way, the vendors I am referring to here are all, in point of fact, implementing the new standards. What I shake my head at is the fact that none of them chose to include mention of or reference to the standards, implying by omission that they had done something unique in their space reclamation implementations.
As to the "hack" of writing zeros and then reclaiming them, i'll gladly bestow kudos on those that built hardware optimized for this purpose. But they are still hacks, because writing TBs of zeros places a huge burden on not only the storage, but on the storage networks as well. I've seen VMware clusters totally saturate an FC fabric writing zeros; implementing WRITE_SAME not only removes the need for search-for-zeros but places an infinitesimal overhead on the SAN and storage to boot.
Would that we vendors could work as hard on getting standards approved and implemented as we do on trying to differentiate - especially when cooperation between multiple different components of the IT infrastructure is required for an optimal solution. I guess if you have an ASIC that can scrub zeros quickly, there's not much incentive to push for standards that deprecate the value of your custom hardware.
Truth be told, work on these standards was started back in 2007 before EMC brought STEC's Flash Drives forward for the industry. It is a shame that none of the early-implementers of thin provisioning were pushing for these reclamation standards before then.
I was just saying on another blog how I hoped thin reclamation would become a standard API built into the operating systems someday. I appreciate the article.
Barry, of interest is your claim to have "seen practically every vendor who is shipping support for this feature today practically claiming to have invented it"... followed by your statement: "Writing zeros over unused space and then reclaiming the zeros is a hack that the new APIs eliminate".
You are practically implying that vendors are deploying published, co-developed RFC technology and claiming it as their own, when technically they have had to develop separate, unique interim solutions (hacks) to make the process work for now. I think you have a wrinkle in your space-time continuum. As you clearly stated, the published RFC technology requires the host operating systems to use the new SCSI commands, which as of today they do not... hence the need, as you pointed out, for vendors to develop (invent) their own processes for zeroing out data and unallocating it. Everybody is doing it, in their own proprietary way.
I don't understand your position of wagging fingers at vendors for developing thin provisioning technologies and taking credit for them. While I deeply appreciate the effort to standardize these functions, credit remains due to the early adopters rather than to the "me-too" bandwagoneers that benefit.
Thanks for the feedback.
As you probably know, the SBC-3 RFCs have only just recently stabilized sufficiently for implementation. And indeed, VMAX support is included in an as-yet-not-shipping software update due later this year.
The flags are actually already implemented in VMAX, so existing LUNs will already be tagged. For VP devices, zero page reclaim will find and release all-zero extents. And the upcoming Enginuity release includes updates to Open Replicator that will avoid copying all-zero blocks (PPME and Open Migrator already do this today).
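For illustration only, here is a toy sketch of the skip-all-zero-extents idea - this is not the actual Open Replicator/Enginuity code, and the extent size and function name are made up:

```python
EXTENT = 64 * 1024   # assumed extent size, purely for illustration

def copy_skipping_zero_extents(src, dst):
    """Toy migration loop: copy extent by extent, but seek past extents that
    are entirely zero so a thin (or sparse) target never allocates for them."""
    copied = skipped = 0
    while True:
        chunk = src.read(EXTENT)
        if not chunk:
            break
        if chunk.count(0) == len(chunk):   # all-zero extent: leave a hole
            dst.seek(len(chunk), 1)        # advance the target without writing
            skipped += 1
        else:
            dst.write(chunk)
            copied += 1
    dst.truncate()                         # extend the target to full length
    return copied, skipped
```

On a sparse file or a thin target device, the seek-and-truncate leaves the skipped ranges unallocated, which is essentially the space saving described above.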
And indeed, I suspect others will take similar approaches to optimize their support for space reclamation - I explain the Symm's approach here for your interest, not as some sort of differentiated uniqueness.
I will look into what I can detail about the two different standards, possibly for a future post.
Good to hear you talk about something technical and interesting again.
Couple of quick comments -
1. This really good post was let down (as usual) by your unnecessary digs at other vendors. I think your story would have gone down better with me if it weren't littered with attacks on the competition. Makes readers think your story needs padding out to have substance.
2. Every vendor I've spoken to about these technologies has listed the other vendors that support it too, and none of them has tried to convince me that they invented it, or that they alone can reclaim free space from deleted files. However, you (Symm) are not currently on their lists of vendors that support it. That's the truth though, right? If so, that's a far cry from the picture you're trying to paint. Again, it detracts from the crux of your message (IMHO).
3. I assume the NWBH flag can only be applied to newly created LUNs and, for example, not to LUNs migrated in from other arrays, etc... Also, I imagine that LUNs created with prior versions of Enginuity won't qualify either. Is this correct?
Still, despite your childish antics, there is some great content in this post. I think the NWBH and SBZ flags sound cool. But I have to ask - how long before the other vendors implement similar features?
BTW, I think you owe us a technical post on the differences between WRITE SAME with the UNMAP flag set and UNMAP(). I'm cool with TRIM, but I think a lot of folks (myself included) are a little grey around WRITE SAME() and UNMAP().
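In the meantime, here's a rough sketch of how the two differ on the wire, based on my own reading of the SBC-3 drafts (the opcodes come from the standard, but treat the field packing, the 512-byte block size, and the helper names as illustrative assumptions rather than a definitive reference):

```python
import struct

def write_same16_unmap(lba, nblocks, block_size=512):
    """Sketch of a WRITE SAME(16) CDB (opcode 0x93) with the UNMAP bit set.
    The command carries a one-block data-out buffer (all zeros here); the
    device may deallocate the range rather than physically write the pattern."""
    cdb = struct.pack(">BBQIBB",
                      0x93,      # operation code: WRITE SAME(16)
                      0x08,      # flags byte with the UNMAP bit set
                      lba,       # starting LBA (8 bytes)
                      nblocks,   # number of logical blocks (4 bytes)
                      0, 0)      # group number, control
    return cdb, bytes(block_size)

def unmap_cdb(lba, nblocks):
    """Sketch of an UNMAP CDB (opcode 0x42) plus its parameter list: an 8-byte
    header followed by one 16-byte block descriptor (LBA + block count).
    No data pattern is sent at all - just the ranges to deallocate."""
    descriptor = struct.pack(">QI4x", lba, nblocks)
    header = struct.pack(">HH4x",
                         6 + len(descriptor),   # UNMAP data length
                         len(descriptor))       # block descriptor data length
    param_list = header + descriptor
    cdb = struct.pack(">BB4xBHB",
                      0x42,                     # operation code: UNMAP
                      0x00,                     # flags (ANCHOR = 0)
                      0,                        # group number
                      len(param_list),          # parameter list length
                      0)                        # control
    return cdb, param_list
```

The practical difference the sketch tries to show: WRITE SAME carries a single-block pattern and asks the device to (logically) repeat it across the range - with the UNMAP bit set the device may deallocate instead - while UNMAP carries no pattern at all, just a list of LBA ranges the device is allowed to deallocate; TRIM is the ATA-world analogue of the latter.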