Tuesday, January 29, 2019

Dell EMC Client Heroes FY19 Program Ends with a Desert Sunset - Dell EMC Certifications


The Dell EMC Client Heroes Program continued in Q4, with our final FY19 events taking place in Bogotá, Colombia, and Dubai, UAE.  These attractive destinations were selected for their ease of access for partners across the two regions and for the concentration of expert presenters and facilities available there.

Leveraging the same format as typical Heroes Exchange events, our goal is to bring up-to-date, relevant information about Dell EMC products, solutions and services directly to our Hero partner community. Local subject matter experts (SMEs) lead interactive and engaging sessions, a style that benefits presenters and partners alike by enabling dialogue and information sharing in a face-to-face environment.

Our event in Bogotá took place in early December, with approximately 40 key LATAM partners joining for a fantastic one-day session mixing learning, networking and casino activities.

Destination Dubai was a great setting for a large-scale event combining our typical Heroes program with the EMEA Workstation Academy, a two-day format whose second day allowed a deeper dive into content and demonstrations. The new Dell EMC solution center in Dubai was the perfect choice for our inaugural event there, bringing 80 key partners and distributors from across EMEA to this unique location. In addition to the valuable learning provided during the sessions, attendees had the opportunity to experience a desert safari and sunset, making for a very memorable experience indeed.

Success Secrets: How You Can Pass Dell EMC Certification Exams on the First Attempt



Wednesday, January 16, 2019

FPGAs vs. GPUs: A Tale of Two Accelerators - Dell EMC Certifications


In deep learning applications, FPGA accelerators offer unique advantages for certain use cases.

In artificial intelligence applications, including machine learning and deep learning, speed is everything.

Whether you’re talking about autonomous driving, real-time stock trading or online searches, faster results equate to better results.

This need for speed has led to a growing debate on the best accelerators for use in AI applications. In many cases, this debate comes down to a question of server FPGAs vs. GPUs — or field programmable gate arrays vs. graphics processing units.

To see signs of this lively debate, you need to look no further than the headlines in the tech industry. A few examples that pop up in searches:

  • “Can FPGAs Beat GPUs in Accelerating Next-Generation Deep Learning?”
  • “FPGA vs GPU for Machine Learning Applications: Which One Is Better?”
  • “FPGAs Challenge GPUs as a Platform for Deep Learning”

So what is this lively debate all about? Let’s start at the beginning. Physically, FPGAs and GPUs often plug into a server PCIe slot. Some, like the NVIDIA® Volta Tesla V100 SXM2, are mounted directly onto the server motherboard. Note that GPUs and FPGAs do not function on their own without a server, and neither FPGAs nor GPUs replace a server’s CPU(s). They are accelerators, adding a boost to the CPU server engine. At the same time, CPUs continue to get more powerful and capable, with integrated graphics processing. So start the engines — the race is on between servers that have been chipped, turbocharged and supercharged.

FPGAs can be programmed after manufacturing, even after the hardware is already in the field — which is where the “field programmable” comes from in the field programmable gate array (FPGA) name. FPGAs are often deployed alongside general-purpose CPUs to accelerate throughput for targeted functions in compute- and data-intensive workloads. They allow developers to offload repetitive processing functions in workloads to rev up application performance.

GPUs are designed for the types of computations used to render lightning-fast graphics — which is where the “graphics” comes from in the graphics processing unit (GPU) name. The Mythbusters demo of GPU versus CPU is still one of my favorites, and it’s fun that the drive for video-game screen-to-controller responsiveness ended up impacting the entire IT industry, as accelerators have been adopted for a wide range of other applications, from AutoCAD and virtual reality to cryptocurrency mining and scientific visualization.

FPGA and GPU makers continuously compare against CPUs, sometimes making it sound like they can take the place of CPUs. The turbo kit still cannot replace the engine of the car — at least not yet. However, accelerator makers want to make the case that the boost makes all the difference, and it can, depending on how fast you want or need your applications to go. And just as with cars, it comes at a price. Beyond the acquisition cost, the price includes the heat generated (accelerators run hotter), the fuel required (they need more power), and the fact that some applications aren’t programmed to take full advantage of the available acceleration.



So which is better for AI workloads like deep learning inferencing? The answer is: It depends on the use case and the benefits you are targeting. The ample commentary on the topic finds cases where FPGAs have a clear edge and cases where GPUs are the best route forward.

Dell EMC distinguished engineer Bhavesh Patel addresses some of these questions in a tech note exploring reasons to use FPGAs alongside CPUs in the inferencing systems used in deep learning applications. A bit of background: When a deep learning neural network has been trained to know what to look for in datasets, the inferencing system can make predictions based on new data. Inferencing is all around us in the online world. For example, inferencing is used in recommendation engines — you choose one product and the system suggests others that you’re likely to be interested in.
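To make the recommendation-engine example concrete, here is a minimal sketch of item-to-item recommendation using cosine similarity. The product names and feature vectors are invented for illustration; a real inferencing system would score items with a trained neural network rather than a hand-built catalog, but the "you pick one item, the system suggests similar ones" pattern is the same.

```python
# Illustrative item-to-item recommendation by cosine similarity.
# Product names and feature vectors below are invented for this example.
import math

# Each product is described by a small feature vector, standing in for
# the embeddings a trained model would produce.
catalog = {
    "laptop":  [0.9, 0.1, 0.0],
    "dock":    [0.8, 0.2, 0.1],
    "monitor": [0.7, 0.1, 0.3],
    "headset": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def recommend(chosen, k=2):
    """Return the k products most similar to the chosen one."""
    scores = [
        (name, cosine(catalog[chosen], vec))
        for name, vec in catalog.items() if name != chosen
    ]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in scores[:k]]
```

Picking "laptop" here surfaces "dock" and "monitor" as the closest items — the inference step is just scoring new input against what the model already knows.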

In his tech note, Bhavesh explains that FPGAs offer some distinct advantages when it comes to inferencing systems. These advantages include flexibility, latency and power efficiency. Let’s look at some of the points Bhavesh makes:

Flexibility for fine tuning


FPGAs provide flexibility for AI system architects looking for competitive deep learning accelerators that also support customization. The ability to tune the underlying hardware architecture and use software-defined processing allows FPGA-based platforms to deploy state-of-the-art deep learning innovations as they emerge.

Low latency for mission-critical applications


FPGAs offer unique advantages for mission-critical applications that require very low latency, such as autonomous vehicles and manufacturing operations. The data flow in these applications may be streaming, requiring pipeline-oriented processing. FPGAs are excellent for these kinds of use cases, given their support for fine-grained, bit-level operations compared to GPUs and CPUs.

Power savings


Power efficiency can be another key advantage of FPGAs in inferencing systems. Bhavesh notes that since the logic in FPGAs has been tailored for specific applications and workloads, the logic is extremely efficient at executing the application. This can lead to lower power usage and increased performance per watt. By comparison, CPUs may need to execute thousands of instructions to perform the same function that an FPGA may be able to implement in just a few cycles.
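"Performance per watt" is just throughput divided by power draw. The sketch below shows the arithmetic with invented numbers — they are not measured figures for any real CPU or FPGA, only an illustration of why a lower-power device with tailored logic can come out ahead on this metric.

```python
# Back-of-the-envelope performance-per-watt comparison.
# All figures are invented for illustration, not measured values.
def perf_per_watt(inferences_per_sec, watts):
    """Throughput per watt of power consumed."""
    return inferences_per_sec / watts

# Hypothetical devices: the CPU is faster per instruction but burns
# more power; the FPGA's tailored pipeline wins on efficiency.
cpu  = perf_per_watt(inferences_per_sec=1_000, watts=150)  # ~6.7 inf/s per W
fpga = perf_per_watt(inferences_per_sec=4_000, watts=75)   # ~53.3 inf/s per W
```

With these made-up numbers the FPGA delivers roughly 8x the inferences per watt, which is the kind of gap the tech note's argument turns on.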

All of this, of course, is part of a much larger discussion on the relative merits of FPGAs and GPUs in deep learning applications — just like with turbo kits vs. superchargers. For now, let’s keep this point in mind: When you hear someone say that deep learning applications require accelerators, it’s important to take a closer look at the use case(s). I like to think about it as if I’m chipping, turbo or super-charging my truck. Is it worth it for a 10-minute commute without a good stretch of highway? Would I have to use premium fuel or get a hood scoop? Might be worth it to win the competitive race, or for that muscle car sound.

What our experts say about Dell EMC Certification Exams



Thursday, January 3, 2019

What do Data Domain customers have to say about Data Domain - Dell EMC Certifications


Better together may be a cliché, but there is no better term for the pairing of a Data Domain appliance and the customer who owns it. The Data Domain appliance portfolio, both physical and virtual, has helped many businesses modernize their IT environments while providing best-of-breed deduplication and protection storage. Below are a few stories that are a testament to the Data Domain portfolio and all it has to offer.

Phoenix Children’s Hospital


In a hospital, time is one of the most important variables for success. Before Phoenix Children’s Hospital implemented protection storage from Dell EMC, its prior solution resulted in backup windows of upwards of 24 hours. “Moving to Data Domain with deduplication we can finish our full backups in less than 7 hours” – Theodore Fotias – VP IT Infrastructure – Phoenix Children’s Hospital

Watch and learn how Data Domain helped Phoenix Children’s Hospital reduce backup windows and, more importantly, ensure top-notch care for the children who stay at the hospital.

“Everything we do is about and for the kids. It’s crucial to make sure that all critical data that a doctor or a nurse needs to be able to take care of a sick child, is protected and available at any time.” – Theodore Fotias

Founders Federal Credit Union


Founders Federal is a regional financial institution that started in 1950 and has grown to over 30 locations in North and South Carolina. While working with Dell EMC engineers, Founders Federal became aware of some gaps in its data protection strategy. “Today’s benefit of using Dell EMC Data Protection Software combined with Data Domain is the ability to deliver a deduplication rate of 72:1.” – Bob Bender – CTO – Founders Federal Credit Union

FieldCore, a GE Company


With Hurricane Irma bearing down on Florida, FieldCore was concerned about its existing backup strategy. FieldCore switched to a Data Domain-centric backup strategy and was up and running before Hurricane Irma made landfall.

The Dell EMC solution was implemented in less than four days. “We went from a 4:1 deduplication ratio to a 41:1 deduplication ratio with Data Domain. We saw backups go from taking upwards of 24 hours to complete to 98% of our backups completing in less than one hour. This allowed us to replicate our data before the storm actually hit.” – Kerry Johnson – Senior Systems Engineer – FieldCore
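For readers unfamiliar with figures like 4:1 and 41:1, a deduplication ratio is simply the logical (pre-deduplication) data size divided by the physical capacity actually stored. A minimal sketch of the arithmetic, with capacities invented purely to mirror FieldCore's 41:1 figure:

```python
# Deduplication ratio = logical (pre-dedup) bytes / physical (stored) bytes.
# The capacities below are made up to illustrate a 41:1 result like
# the one quoted above; they are not FieldCore's actual numbers.
def dedup_ratio(logical_bytes, physical_bytes):
    """How many times the logical data exceeds what is physically stored."""
    return logical_bytes / physical_bytes

TB = 10**12  # decimal terabyte

ratio = dedup_ratio(logical_bytes=410 * TB, physical_bytes=10 * TB)
# ratio == 41.0, i.e. a 41:1 deduplication ratio
```

Going from 4:1 to 41:1 means storing roughly a tenth of the physical data for the same logical backups, which is also why backup and replication windows shrink so dramatically.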

The Data Domain portfolio of backup appliances and Data Protection Software has helped countless businesses achieve backup nirvana. If you are not familiar with Data Domain, ask your Dell EMC sales representative for more information. You can also learn more about Data Domain, and stay up to date with Data Domain announcements, on our website.
