Exploring The Frontiers of Cloud Native
Towards “Distributed Cloud Native”
I remember the early days of Cloud Native, when there were big debates over schedulers, pitting distributed computing against cloud native computing for managing cloud workloads.
On one hand, there were many schedulers from the distributed computing and batch scheduling realm, such as YARN, HTCondor, and Grid Engine, that were increasingly managing cloud workloads. On the other, there were new “cloud native” schedulers such as Kubernetes, Docker Swarm, and Marathon that claimed to be better suited to orchestrating cloud-native technologies like containers.
Of course, we know that over time, Cloud Native scheduling — and in particular, Kubernetes — became the dominant approach for scheduling cloud workloads. However, just because Kubernetes and Cloud Native “won” doesn’t mean that distributed computing requirements went away. Instead, the market fragmented: many distributed computing workloads could not run as cloud native workloads because they depended on features that Kubernetes could not readily address, such as complex scheduling, extreme scale, and batch processing.
For many years, this was not a major issue. However, the relatively recent rise of large-scale, compute-intensive workloads in the cloud, such as machine learning, has required the previously distinct realms of distributed processing and cloud-native processing to come together.
At Huawei, we have been working on this problem for a number of years now, with the aim of providing a merged set of capabilities we call Distributed Cloud Native. Distributed Cloud Native brings the advanced scheduling capabilities of distributed computing into the realm of Kubernetes and cloud native computing.
As a result, we have been able to push cloud native computing into new frontiers: physically extreme industries, extreme scale, multiple clouds, and new communities.
New boundaries: Physically extreme industries to Cloud Native
We contributed the open source project, KubeEdge, to the Cloud Native Computing Foundation (CNCF). KubeEdge has since grown into a tremendously successful project, enabling Kubernetes to deploy into many physically extreme environments at the edge, away from cloud data centers.
For example, with KubeEdge and Kubernetes, we have:
- Deployed Kubernetes in space satellites, managing the open source MindSpore AI framework to perform orbit-earth coordinated image inference, incremental deep learning, and federated learning in space.
- Brought distributed cloud native scheduling into offshore oilfields. Oil drilling requires tremendous distributed data analysis to find the right drilling sites in extreme environments, like oceans, that are far from cloud data centers and reliable network connectivity.
- Created cloud-native Internet of Vehicle (IoV) deployments by deploying Kubernetes in automobiles. These vehicles have to deal with unreliable connections and constantly changing environments at high speeds while communicating with each other as well as managing containerized services within the cars, such as smart-cabin or autonomous driving features. Today, more than 200,000 vehicles a year ship with Kubernetes and KubeEdge inside of them. And, we are able to manage clusters of 100,000 vehicles at a time.
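Because KubeEdge registers edge devices as ordinary Kubernetes nodes (labeled with `node-role.kubernetes.io/edge`), workloads can be steered to them with standard scheduling primitives. As a minimal sketch — the image name and app label here are hypothetical placeholders — an edge inference service might be deployed like this:

```yaml
# Pin a containerized inference workload onto KubeEdge-managed edge nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-inference
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-inference
  template:
    metadata:
      labels:
        app: edge-inference
    spec:
      # KubeEdge labels nodes joined via keadm with this role label.
      nodeSelector:
        node-role.kubernetes.io/edge: ""
      containers:
        - name: inference
          image: example.com/inference:latest  # placeholder image
```

The key point is that the same declarative API used in the data center extends unchanged to satellites, oil rigs, and vehicles; only the node selection differs.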
Distributed Cloud Native Computing at extreme edge environments with Kubernetes and KubeEdge has been one of the major developments that Huawei has contributed to open source.
New Breadth: Scalability breakthroughs for Cloud Native
Data-intensive workloads such as machine learning require bringing distributed computing features, like advanced scheduling, extreme scale, and batch processing, into cloud-native computing. To facilitate this, Huawei has contributed the Volcano open source project to CNCF.
Volcano provides Kubernetes with many advanced distributed computing features. As a result, Kubernetes with Volcano can handle both cloud-native workloads and AI or big data workloads at large scale. For example, with Volcano, we have enabled Kubernetes to scale to 1 million pods, increased throughput by 1,000%, and improved utilization by 60%. These massive improvements enable Kubernetes to scale to the needs of large distributed computing workloads such as machine learning.
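To make this concrete, Volcano introduces a batch-oriented `Job` resource with gang-scheduling semantics: `minAvailable` ensures that a distributed training job only starts once enough pods can be scheduled together, avoiding the deadlocks that plague piecemeal pod scheduling. A minimal sketch (the job name and image are hypothetical placeholders):

```yaml
# A gang-scheduled batch job using the Volcano scheduler.
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: ml-training
spec:
  schedulerName: volcano   # hand scheduling to Volcano instead of the default scheduler
  minAvailable: 3          # gang scheduling: all 3 workers start together or not at all
  queue: default
  tasks:
    - name: worker
      replicas: 3
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: trainer
              image: example.com/trainer:latest  # placeholder image
```

Gang scheduling matters for machine learning because a training job with only some of its workers running wastes resources while blocking others from starting.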
New Ubiquity: The first CNCF project for multi-cloud Kubernetes
Most enterprise companies have a strategy to become multi-cloud, and Kubernetes is typically key to this strategy because it provides a common platform for building cloud-native applications. However, Kubernetes has a limitation in that it cannot really handle scheduling applications that span more than one cloud or cluster. There are a variety of solutions in the market to address this limitation, but they are mainly single-vendor solutions that lock you into a proprietary approach.
To enable open, multi-cloud Kubernetes, Huawei has contributed Karmada to CNCF. Karmada is the first CNCF project to facilitate cross-cloud, cross-cluster Kubernetes scheduling. Because Karmada is hosted at CNCF, it is a truly open solution, not tying you to a particular vendor for multi-cloud. Furthermore, Karmada greatly simplifies becoming multi-cloud by keeping the same APIs as Kubernetes, providing unified network management, and including many advanced scheduling capabilities that are immediately ready to use.
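Karmada expresses cross-cluster placement declaratively: a `PropagationPolicy` selects ordinary Kubernetes resources and states which member clusters they should run in, so applications keep their unchanged Kubernetes manifests. As a minimal sketch (the deployment and cluster names are hypothetical placeholders):

```yaml
# Propagate an existing Deployment to two member clusters via Karmada.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx          # the unchanged Kubernetes resource to distribute
  placement:
    clusterAffinity:
      clusterNames:        # placeholder member-cluster names
        - cluster-a
        - cluster-b
```

Because placement lives in a separate policy object, moving an application to a new cloud is a policy change, not an application change.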
With Karmada, Kubernetes takes a giant step toward enabling multi-cloud applications in a completely open way.
New and expanding community
Perhaps even more important than pushing the technology boundaries of cloud-native computing, Huawei has been focused on expanding the frontiers of the cloud native community. For example:
- The growing end-user member community within CNCF is one of the strengths of the overall cloud-native ecosystem. However, cloud-native adoption in China lags behind many other parts of the world, and there are far fewer end-user members in CNCF from China than from other countries. To help grow the number of cloud-native end users in China, we have partnered with CNCF and CAICT to create a Cloud Native Elite Club. This provides a forum where executives across different companies can come together to learn from each other and share best practices and the benefits of moving toward cloud native. We now have more than 100 CXOs participating in the Cloud Native Elite Club.
In addition to the Cloud Native Elite Club, Huawei has helped to organize over 30 Kubernetes Community Days (KCD) and Cloud Native Days events in China, reaching over 100,000 developers.
- We have been working to grow the number of women in the CNCF community, supporting not only CNCF’s goals around diversity but also the overall health of the CNCF community and new opportunities for women in technology. We are pleased that we have been able to support 120 women to join the cloud native community.
Cloud native computing has come a long way since the early debates about schedulers and is now one of the most important technologies for building services and applications. We are excited to continue to push the boundaries of cloud native technologies and the community to see where they take us next!
- Cloud Native 2.0: The Cloud Native for Every Enterprise [HUAWEI CLOUD solutions]
- Inspiring Women: How to Reduce the Gender Gap in STEM & ICT [blog post]
Article Source: Huawei
Disclaimer: Any views and/or opinions expressed in this post by individual authors or contributors are their personal views and/or opinions and do not necessarily reflect the views and/or opinions of Huawei Technologies.