Wednesday, February 13, 2019

3 Things to Consider Before Going All-In on Public Cloud - Dell EMC Certifications


How much of your business is run in the public cloud?


You’ll probably say somewhere between none and all. Only a tiny fraction of businesses run either all on-premises or all in the public cloud. For the most part, IT departments just aren’t at the extremes. They run a hybrid infrastructure – a mix of on-premises and public cloud compute. But, why?

Some businesses run a hybrid infrastructure because they choose to.

They don’t listen to hype like “the data center is dead.” They rationally looked at their environment, their industry, their business, their objectives, and their workloads to decide what’s best for them. They discovered that a hybrid IT model gives them the most flexibility, along with the most control over things like security and costs. If you were to ask, they’d tell you they run a hybrid infrastructure because certain workloads are better suited to running on-premises, while others are better suited to the public cloud.

Some businesses run a hybrid infrastructure because they have to.

They are on a journey to move everything to the public cloud. But not everything is movable, yet. So they are stuck. The number of applications running on-premises is dwindling, but they still have to keep the data center operational. These businesses spend a big chunk of their effort trying to abandon their on-premises infrastructure.

Why are some businesses planning to move everything to the public cloud?


For some, the public cloud’s sweet siren songs about things like unlimited capacity and scalability are irresistible. Others see the public cloud as a cure for what ails their on-premises data center. It’s important to remember that an all-in public cloud approach doesn’t immunize you from all the problems plaguing your on-premises environment.

Changing people, process, and technology in current on-premises environments is hard. It can be really hard with outdated IT equipment and no upper-management support for investing in the data center. So, they opt for a clean slate to draw their future.

Regardless of the reason, if this is you, or if it’s your company’s strategy, here are 3 things you’re really saying when you move everything to the public cloud.

“We don’t need a workload placement strategy.”

Public clouds have amazing capabilities and you should take advantage of them. However, with an all-in public cloud approach, you are guaranteeing that no other environment is better for a particular workload. Governance, risk management, and compliance analysis may suggest that running an application on-premises is optimal. Maybe service quality can be improved with infrastructure closer to home. With an all-in public cloud approach, you don’t get to make those assessments. You don’t need a workload placement strategy because nothing is run on-premises.

“It’s actually better to put all our eggs in one basket, because we get more out of it.”

It’s true that it’s much easier to learn one technology and deal with one vendor. You’re able to go deep and take advantage of a lot of features. But, it’s rarely advised to give anything or anyone too much power or control. No one wants to be vulnerable to a supplier driving up prices. The tech industry has largely moved away from proprietary technology as customers try to avoid lock-in. With an all-in public cloud approach, you put all your eggs in one basket. To get the most out of a move to the public cloud, you select a primary public cloud vendor and use as many of their key services as possible. These key services are proprietary and the cost to switch to another public cloud or back on premises is high.

“Deploying compute or having data repositories at the edge won’t be necessary.”

The use cases for the edge are just now coming into focus for business. IoT and 5G are driving much of this. Some experts predict that edge intelligence will be so massive that it will usher in the end of cloud computing. Who knows? What’s clear is that the edge is still a little fuzzy for businesses to predict exactly how it will affect them. Edge deployments of compute and data repositories will make the most sense in many cases. For instance, sending huge amounts of data to the cloud for processing may take too long. Instead, data is collected and analyzed locally, in real-time, and only the resultant data is sent to the cloud. With an all-in public cloud approach, all use cases at the edge aggregate data and move it to the cloud for processing and analysis.

The businesses that choose to run a hybrid infrastructure think of their data center as an investment, not a cost. They maximize their placement options for each workload without sacrificing flexibility, agility, or scalability. In many cases, they actually gain from running a workload on-premises compared to the public cloud.

Maybe it’s time to rethink your journey to 100% public cloud. Maybe now is the time to change course from feeling burdened by a hybrid infrastructure model to being empowered by it. IDC considers the hybrid approach to be the most pragmatic and effective in the short and long terms.

Secret To Pass Dell EMC Certification Exams In First Attempt



Tuesday, January 29, 2019

Dell EMC Client Heroes FY19 Program Ends with a Desert Sunset - Dell EMC Certifications


The Dell EMC Client Heroes Program continued in Q4 with our final FY19 events taking place in Bogota, Colombia and Dubai, UAE. These attractive destinations were selected for their ease of access for partners across these regions and for the concentration of expert presenters and facilities available there.

Leveraging the same format as typical Heroes Exchange events, these sessions bring up-to-date, relevant information about Dell EMC products, solutions and services directly to our Hero partner community. Local Subject Matter Experts (SMEs) lead interactive and engaging sessions, a style that benefits presenters and partners alike by enabling dialogue and information-sharing in a face-to-face environment.

Our event in Bogota took place in early December, with approximately 40 key LATAM partners joining for a fantastic one-day session with a mix of learning, networking and casino activity.

Destination Dubai was a great setting for a large-scale event combining our typical Heroes program and the EMEA workstation academy, a two-day format which enabled more of a deep dive into content and demonstrations on the latter. The new Dell EMC solution center in Dubai was the perfect choice for our inaugural event here, bringing 80 key partners and distributors from across EMEA to this unique location. In addition to the valuable learning provided during the sessions, attendees had the opportunity to experience a desert safari and sunset, adding to a very memorable experience indeed.




Wednesday, January 16, 2019

FPGAs vs. GPUs: A Tale of Two Accelerators - Dell EMC Certifications


In deep learning applications, FPGA accelerators offer unique advantages for certain use cases.

In artificial intelligence applications, including machine learning and deep learning, speed is everything.

Whether you’re talking about autonomous driving, real-time stock trading or online searches, faster results equate to better results.

This need for speed has led to a growing debate on the best accelerators for use in AI applications. In many cases, this debate comes down to a question of server FPGAs vs. GPUs — or field programmable gate arrays vs. graphics processing units.

To see signs of this lively debate, you need to look no further than the headlines in the tech industry. A few examples that pop up in searches:

  • “Can FPGAs Beat GPUs in Accelerating Next-Generation Deep Learning?”
  • “FPGA vs GPU for Machine Learning Applications: Which One Is Better?”
  • “FPGAs Challenge GPUs as a Platform for Deep Learning”

So what is this lively debate all about? Let’s start at the beginning. Physically, FPGAs and GPUs often plug into a server PCIe slot. Some, like the NVIDIA® Volta Tesla V100 SXM2, are mounted onto the server motherboard. Note that GPUs and FPGAs do not function on their own without a server, and neither FPGAs nor GPUs replace a server’s CPU(s). They are accelerators, adding a boost to the CPU server engine. At the same time, CPUs continue to get more powerful and capable, with integrated graphics processing. So start the engines and the race is on between servers that have been chipped, turbo and supercharged.

FPGAs can be programmed after manufacturing, even after the hardware is already in the field — which is where the “field programmable” comes from in the field programmable gate array (FPGA) name. FPGAs are often deployed alongside general-purpose CPUs to accelerate throughput for targeted functions in compute- and data-intensive workloads. They allow developers to offload repetitive processing functions in workloads to rev up application performance.

GPUs are designed for the types of computations used to render lightning-fast graphics — which is where the “graphics” comes from in the graphics processing unit (GPU) name. The Mythbusters demo of GPU versus CPU is still one of my favorites and it’s fun that the drive for video game screen-to-controller responsiveness impacted the entire IT industry, as accelerators have been adopted for a wide range of other applications ranging from AutoCAD and virtual reality to crypto-currency mining and scientific visualization.

FPGA and GPU makers continuously compare against CPUs, sometimes making it sound like they can take the place of CPUs. The turbo kit still cannot replace the engine of the car — at least not yet. However, they want to make the case that the boost makes all the difference. They want to prove that the acceleration is really cool. And it is, depending on how fast you want or need your applications to go. And just like with cars, it comes at a price. Beyond the acquisition cost, the price includes the heat generated (accelerators run hotter), the fuel required (they need more power), and the fact that some applications aren’t programmed to take full advantage of the available acceleration (see the GPU applications catalog).



So which is better for AI workloads like deep learning inferencing? The answer is: It depends on the use case and the benefits you are targeting. The ample commentary on the topic finds cases where FPGAs have a clear edge and cases where GPUs are the best route forward.

Dell EMC distinguished engineer Bhavesh Patel addresses some of these questions in a tech note exploring reasons to use FPGAs alongside CPUs in the inferencing systems used in deep learning applications. A bit of background: When a deep learning neural network has been trained to know what to look for in datasets, the inferencing system can make predictions based on new data. Inferencing is all around us in the online world. For example, inferencing is used in recommendation engines — you choose one product and the system suggests others that you’re likely to be interested in.
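The training-then-inference flow described above can be sketched with a toy recommendation example. This is a hypothetical illustration only: the item vectors stand in for embeddings a training phase might have learned, and a simple cosine-similarity lookup stands in for a real neural network's inference step.

```python
import math

# Toy item "embeddings" a training phase might have produced (hypothetical values).
ITEM_VECTORS = {
    "laptop":   [0.9, 0.1, 0.2],
    "docking":  [0.8, 0.2, 0.3],
    "monitor":  [0.7, 0.3, 0.1],
    "backpack": [0.1, 0.9, 0.4],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def recommend(chosen, k=2):
    """Inference step: given one chosen product, rank the rest by similarity."""
    query = ITEM_VECTORS[chosen]
    scored = [(name, cosine(query, vec))
              for name, vec in ITEM_VECTORS.items() if name != chosen]
    scored.sort(key=lambda t: t[1], reverse=True)
    return [name for name, _ in scored[:k]]

print(recommend("laptop"))  # similar items rank ahead of the dissimilar backpack
```

The training run is the expensive part; the lookup above is cheap per request but runs constantly at scale, which is why inferencing throughput and latency are where accelerators earn their keep.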

In his tech note, Bhavesh explains that FPGAs offer some distinct advantages when it comes to inferencing systems. These advantages include flexibility, latency and power efficiency. Let’s look at some of the points Bhavesh makes:

Flexibility for fine tuning


FPGAs provide flexibility for AI system architects looking for competitive deep learning accelerators that also support customization. The ability to tune the underlying hardware architecture and use software-defined processing allows FPGA-based platforms to deploy state-of-the-art deep learning innovations as they emerge.

Low latency for mission-critical applications


FPGAs offer unique advantages for mission-critical applications that require very low latency, such as autonomous vehicles and manufacturing operations. The data flow in these applications is often streaming, requiring pipeline-oriented processing. FPGAs are excellent for these kinds of use cases, given their support for fine-grained, bit-level operations in comparison to GPUs and CPUs.

Power savings


Power efficiency can be another key advantage of FPGAs in inferencing systems. Bhavesh notes that since the logic in FPGAs has been tailored for specific applications and workloads, the logic is extremely efficient at executing the application. This can lead to lower power usage and increased performance per watt. By comparison, CPUs may need to execute thousands of instructions to perform the same function that an FPGA may be able to implement in just a few cycles.
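To make the performance-per-watt comparison concrete, here is a minimal sketch of the metric itself. The throughput and power figures are hypothetical, chosen only to illustrate the calculation; they are not measurements of any real CPU or FPGA.

```python
def perf_per_watt(inferences_per_sec, watts):
    """Performance per watt: throughput divided by power draw."""
    return inferences_per_sec / watts

# Hypothetical figures for illustration only -- not vendor benchmarks.
cpu = perf_per_watt(inferences_per_sec=2_000, watts=150)   # ~13 inferences/s per watt
fpga = perf_per_watt(inferences_per_sec=5_000, watts=75)   # ~67 inferences/s per watt

print(f"FPGA advantage: {fpga / cpu:.1f}x performance per watt")
```

At data-center scale, where power and cooling dominate operating cost, even a modest per-watt advantage compounds across thousands of inferencing nodes.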

All of this, of course, is part of a much larger discussion on the relative merits of FPGAs and GPUs in deep learning applications — just like with turbo kits vs. superchargers. For now, let’s keep this point in mind: When you hear someone say that deep learning applications require accelerators, it’s important to take a closer look at the use case(s). I like to think about it as if I’m chipping, turbo or super-charging my truck. Is it worth it for a 10-minute commute without a good stretch of highway? Would I have to use premium fuel or get a hood scoop? Might be worth it to win the competitive race, or for that muscle car sound.




Thursday, January 3, 2019

What do Data Domain customers have to say about Data Domain - Dell EMC Certifications


Better together may be a cliché, but there is no better term for the pairing of a Data Domain appliance and the customer who owns it. The Data Domain appliance portfolio, both physical and virtual, has helped many businesses modernize their IT environments while providing best-of-breed deduplication and protection storage. Below are a few stories that are a testament to the Data Domain portfolio and all it has to offer.

Phoenix Children’s Hospital


When in the hospital, one of the most important variables for success is time. Before Phoenix Children’s Hospital had implemented protection storage from Dell EMC, their prior solution resulted in backup windows that were upwards of 24 hours. “Moving to Data Domain with deduplication we can finish our full backups in less than 7 hours” – Theodore Fotias – VP IT Infrastructure – Phoenix Children’s Hospital

Watch and learn how Data Domain helped Phoenix Children’s Hospital reduce backup windows and more importantly, ensure that they provide top notch care to the children that stay at the hospital.

“Everything we do is about and for the kids. It’s crucial to make sure that all critical data that a doctor or a nurse needs to be able to take care of a sick child, is protected and available at any time.” – Theodore Fotias

Founders Federal Credit Union


Founders Federal is a regional financial institution that started in 1950 and has now grown to over 30 locations in North and South Carolina. While working with Dell EMC engineers, Founders Federal became aware of some gaps in their data protection strategy. “Today’s benefit of using Dell EMC Data Protection Software combined with Data Domain is the ability to deliver a deduplication rate of 72:1.” – Bob Bender – CTO – Founders Federal Credit Union

FieldCore, a GE Company


With Hurricane Irma bearing down on the state of Florida, FieldCore was concerned about their current backup strategy. FieldCore switched to a Data Domain centric backup strategy and was up and running before Hurricane Irma made landfall in Florida.

The Dell EMC solution was implemented in less than four days. “We went from a 4:1 deduplication ratio to a 41:1 deduplication ratio with Data Domain. We saw backups go from taking upwards of 24 hours to complete to 98% of our backups completing in less than one hour. This allowed us to replicate our data before the storm actually hit.” – Kerry Johnson – Senior Systems Engineer – FieldCore

The Data Domain portfolio of backup appliances and Data Protection Software has helped countless businesses achieve their backup nirvana. If you are not familiar with Data Domain, ask your Dell EMC Sales Representative for more information. You can also learn more about Data Domain on our website, where you can stay up to date with the latest Data Domain announcements.




Monday, December 17, 2018

Customers and Partners Help Dell EMC Storage Finish Strong in 2018- Dell EMC Certification


Today IDC published its latest 2018 Worldwide Enterprise Storage Systems Tracker – a pivot table that provides market size and vendor share for hundreds of worldwide technology markets.

As we talked about in our recent FYQ3 earnings, Dell EMC had a strong quarter in storage. IDC reported a 31.3% revenue share in external storage for Dell, Inc. – representing +23% year-over-year revenue growth and +2.6 points of y/y share gain. We believe our strong results in the tracker validate our track record of consistently delivering meaningful technology innovation to customers. In doing so, we can report with appreciation and pride that our storage business has achieved three consecutive quarters of share gain in the external storage systems product category.

Dell EMC Storage systems win THREE CRN Product Awards


Each year CRN readers and editorial staff recognize the industry’s most innovative products in a variety of hardware and software categories. CRN also recognizes one vendor for overall excellence in technology innovation with its Editors’ Choice award. Winners are determined based on many factors, including input from the CRN editorial board and a survey sent to more than 4,000 solution providers, who respond with direct satisfaction feedback from partners and their customers.

We are pleased to announce that Dell EMC PowerMax has been named the overall winner of the 2018 CRN Product of the Year Award for storage and the 2018 CRN Technology Innovator Award for Enterprise Storage. But there’s more! CRN also named the Dell EMC Unity 650F All-Flash Array in its 2018 Product of the Year awards as the Midrange Storage subcategory winner for Technology.

These product awards and overall recognition are a testament to the best-of-breed technical innovation, feature/functionality, reliability and quality Dell EMC continues to deliver across our entire primary storage portfolio, and they affirm our commitment to invest in innovation for the future.

SC5020 wins second Editor’s Choice in four months


Again, from our Midrange portfolio, the SC5020 secures yet another accolade for Dell EMC Storage with the 2018 Editor’s Choice from Storage Review. It’s the second year in a row the publication has selected an SC array for one of the coveted Editor’s Choice honors (SC9000 won in 2017). Plus, just this summer, the same SC5020 model was also the 2018 IT Pro Editor’s Choice. Industry acclaim for the SC Series is on the upswing! I encourage you to take a look at this detailed new Storage Review article that includes 30 charts and graphics highlighting SC5020’s phenomenal performance – as well as its “sleek” management capabilities running the latest 7.3 level firmware.

Momentum for Dell EMC Isilon and ECS


For the third straight year, Dell EMC was recognized by Gartner as a leader in the 2018 Magic Quadrant for Distributed File Systems and Object Storage with its Dell EMC Isilon and Dell EMC ECS platforms. The report evaluates Distributed File and Object Storage vendors that help enterprises manage the rapid growth in unstructured data.

To help organizations quickly ramp their AI efforts, we made some exciting announcements this year. For customers looking to leverage a pre-validated hardware and software stack for their Deep Learning initiatives, we launched the Dell EMC Ready Solutions for AI: Deep Learning with NVIDIA in August, which features Isilon All-Flash storage with Dell EMC PowerEdge Servers and Dell EMC Networking. More recently, we expanded our collaboration with NVIDIA by announcing a new reference architecture for AI featuring NVIDIA DGX-1 servers complemented by the high performance of Isilon All-Flash storage. With a true scale-out architecture that enables high performance at PB-scale datasets, Isilon is the perfect storage platform for AI/Deep Learning initiatives.

Executing on our portfolio strategy


Lastly, you’ve heard us say throughout the year that we need to deliver a streamlined storage portfolio that 1) is simple to understand – one product for each market segment – and 2) delivers clearly differentiated products within their intended market segments – from low-end to midrange to enterprise to the unstructured file and object storage segments. As Jeff Clarke, Vice Chairman of Products and Operations, has stated, “An effective product development discipline and methodology and structure is when you streamline things for speed and velocity.” And that’s exactly what we’ve been focused on in 2018 and will continue through the next year.

In 2018, we’ve successfully executed on significant portions of our portfolio simplification strategy to bring you Dell EMC PowerMax for your high-end enterprise storage needs and Dell EMC PowerVault for entry-level price-sensitive businesses. Even though these products have only been available for a relatively brief time, they are already demonstrating immense value for customers around the globe.

PowerMax is off to a great start and is performing ahead of our expectations. We are winning for all the reasons that we planned: 1) NVMe done right proving that architecture matters when trying to deliver the best performance with predictability and data services; 2) PowerMax is surprising customers with its simplicity and access at lower price-points.

In addition to PowerMax and PowerVault being off to strong starts, CloudIQ, our free cloud-based storage analytics and monitoring application, is becoming the umbrella over all our platforms. It has been affirming to me and my team to see our strategy resonating well with customers, partners and analysts.




Tuesday, December 11, 2018

Data Domain Cloud Tier - Dell EMC Certifications


The Data Domain family has been the number-one choice in the Purpose-Built Backup Appliance market. Data Domain provides best-in-class deduplication and scale for your data protection needs, including the ability to tier data to the cloud of your choice via Data Domain Cloud Tier. With Data Domain Cloud Tier, you can send your data directly from the DD appliance to any of the validated and supported cloud object storage providers (public, private or hybrid) for long-term retention needs.

Data Domain Cloud Tier (DD Cloud Tier) is a function of the Data Domain appliance itself. With DD Cloud Tier, Data Domain can natively tier deduplicated data to the cloud. No separate cloud gateway or virtual appliance is required. Data is sent directly from the Data Domain system to the cloud with seamless management through Data Domain Management Center where the data movement policy is managed.

Data Domain Cloud Tier can scale up to 2x the max capacity of the active tier, increasing overall Data Domain system scalability by up to 3x. For example, the DD9800 scales up to 1PB of usable capacity on the active tier, so the cloud tier can support up to 2PB of usable capacity. Factoring in Data Domain’s deduplication ratios, this results in up to 100PB of logical capacity being efficiently protected in the cloud and 150PB of logical capacity overall being managed by a single Data Domain system.
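The capacity math above can be checked in a few lines. Note that the 50:1 deduplication ratio below is an assumption inferred from the post's logical-capacity figures (2PB x 50 = 100PB), not a stated spec; actual ratios vary by workload.

```python
ACTIVE_TIER_PB = 1.0                   # DD9800 max usable active-tier capacity (from the post)
CLOUD_TIER_PB = 2 * ACTIVE_TIER_PB     # cloud tier scales to 2x the active tier
DEDUP_RATIO = 50                       # assumed 50:1, inferred from the post's logical figures

total_usable_pb = ACTIVE_TIER_PB + CLOUD_TIER_PB    # usable PB managed by one system
cloud_logical_pb = CLOUD_TIER_PB * DEDUP_RATIO      # logical PB protected in the cloud
total_logical_pb = total_usable_pb * DEDUP_RATIO    # logical PB across both tiers

print(f"Usable: {total_usable_pb}PB, cloud logical: {cloud_logical_pb}PB, "
      f"total logical: {total_logical_pb}PB")
```

Run with the figures above, this reproduces the 100PB-in-cloud and 150PB-overall numbers cited in the post.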

When it comes to offloading data into the cloud there are a ton of questions that come up with regards to security. Don’t worry, DD Cloud Tier supports DD Encryption to ensure data being sent to the cloud remains secure. Customers have the option to leverage DD Encryption to encrypt all data on the Data Domain system or encrypt just the data being stored in the cloud tier.

To help manage your data both on-prem and in the cloud, Data Domain Management Center (DDMC) can provide key insights into your backup environment. DDMC provides health and capacity reporting as well as a full system view of your DD Cloud Tier. DDMC also gives you visibility into how your data is distributed across multiple clouds: available space, total capacity per provider, and on-prem.

Data Domain Cloud Tier is a great complement to your Data Domain appliance. Data Domain Cloud Tier extends deduplication to the cloud of your choice while freeing up space on your local appliance. To learn more about Data Domain Cloud Tier check out our video. To see Data Domain Cloud Tier in action here is a quick demo. If you have further questions about what Data Domain Cloud Tier can do for you please speak with your sales representative.




Tuesday, December 4, 2018

Dell EMC Advances Hybrid Cloud and Modern Data Center Operations for VMware Environments


STORY HIGHLIGHTS


  • Dell EMC VxRail enhanced as an integrated cloud platform with support for VMware Cloud Foundation and fully automated network configuration with Dell EMC Networking SmartFabric Services
  • Dell EMC VxBlock 1000 joins the Dell EMC Cloud Marketplace with new automation software and integration with VMware vRealize Suite, revolutionizing converged infrastructure (CI) operations by enabling administrators to expand resources in minutes versus hours
  • New 25GbE top-of-rack switches added to Dell EMC’s open networking offerings to help customers meet increasing network demands and migrate to a software-defined data center
  • Expanded range of VMware-based cloud platform and cloud consumption options now available within Dell EMC Cloud Marketplace



Dell EMC, the number-one provider of cloud infrastructure, announces key portfolio enhancements and integrations with VMware designed to help customers further automate operations of their modern data center and hybrid cloud environments. These new capabilities allow businesses to accelerate innovation, simplify operations and speed overall IT Transformation initiatives.

Dell EMC VxRail simplifies networking, management and access to latest VMware innovations 


Organizations globally are choosing highly automated, scale-out VxRail HCI appliances, powered by VMware vSAN, to support their digital transformation objectives and their need to scale operations rapidly and efficiently. Applications and data are increasingly distributed across edge locations, core data centers, and hybrid cloud environments. Customers turn to Dell Technologies as a trusted partner to provide a seamless, simplified experience through a jointly engineered digital foundation between Dell EMC VxRail and VMware. Through this collaboration and co-engineering, VxRail is becoming even simpler to adopt, deploy and manage.
New advances to Dell EMC VxRail include:


  • Simpler networking deployments with the first HCI appliances to integrate fully automated network awareness and configuration during setup, cluster expansion, and day-to-day management with Dell EMC SmartFabric Services. To deploy VxRail at any scale more rapidly, SmartFabric Services, as part of the Dell EMC Networking OS10 Enterprise Edition network operating system, automates up to 98% of the network configuration steps for VxRail hyper-converged environments through integration with VxRail Manager and VMware vSphere. SmartFabric Services also enables customers to quickly deploy and automate data center networking fabrics while remaining fully interoperable with existing data center infrastructure.
  • More automation for the entire VMware cloud stack, along with networking, to more quickly deploy and manage hybrid cloud environments with VxRail clusters. As the only HCI appliance jointly engineered with VMware, VxRail, with support for VMware Cloud Foundation coming soon, offers an integrated cloud platform that delivers an even simpler path to the VMware SDDC and hybrid cloud strategy, future-proofed for next-generation VMware Cloud technologies. It allows extensibility to public cloud providers, such as VMware Cloud on AWS, and hybrid cloud container services such as Pivotal.
  • Transparent systems management with all VxRail tasks available to be managed directly from the familiar VMware vCenter Server console, making it even easier to move to and manage VxRail from the primary management platform for VMware environments.
  • Greater flexibility by supporting a two-node VxRail cluster, instead of the previously required three. This makes VxRail more accessible at the edge for larger organizations, such as retailers with limited requirements at remote locations. Additionally, new, flexible vSAN licensing further enables customers to choose their desired level of HCI software functionality and investment.
  • Tighter integration with next-generation VMware Cloud technology with VxRail now available on the latest vSAN release (version 6.7U1), support for VMware Validated Design for SDDC 4.3 and planned Project Dimension integration for data center, edge, and hybrid-cloud use cases. Project Dimension will combine VMware’s compute, storage and networking solutions with VxRail, managed as a Service by VMware. Additionally, customers can now also use VxRail with VMware Site Recovery for push-button failover to VMware Cloud on AWS for disaster recovery.

New Dell EMC VxBlock Central simplifies CI operations with enhanced awareness, automation and analytics 


Enterprises worldwide modernize their data centers and run their VMware-based clouds using Dell EMC VxBlock Systems—turnkey CI systems that bring together compute, storage, networking and VMware vSphere virtualization. Continuing this long history of CI innovation, demonstrated with the introduction of the VxBlock System 1000, new Dell EMC VxBlock Central software provides converged awareness, automation and analytics to simplify daily CI administration. VxBlock Central includes a single unified user interface for accessing VxBlock System information in real time. It also includes an integrated launch point to VMware vRealize Orchestrator for automating daily operational tasks and a launch point to vRealize Operations for detailed analytics and an easy way to manage VxBlock storage capacity.