We're all developing something. Come join us in making the future.
April 2025
The HPE Developer Community Monthly #96
In this issue, we bring you valuable insights and
practical tips to help you observe, manage, and
optimize your private and hybrid cloud operations,
covering everything from data management to virtual
machine orchestration.
Discover innovative webhook use cases for HPE
GreenLake and learn new techniques to deploy
small language models on HPE Private Cloud AI. We
also examine the evolution, commoditization, and
integration of virtualization into cloud
infrastructures, shedding light on the future of
intelligent private clouds. Additionally, we explore
three prevalent paradigms for managing virtual
machines on Kubernetes.
Stay updated with the latest advancements, including
new releases of HPE iLO and Chapel. Dive into our
April newsletter and make the most of the wealth of
knowledge we have curated for you!
Visit the HPE Developer Community
DayN+: A new way to look at observability
April 16, 2025
5 p.m. CET / 8 a.m. PT
Traditional monitoring systems rely mainly on metrics, predefined
thresholds, and manual log analysis to track system and
application health.
With the integration of cloud
technologies, IT environments have
become significantly more distributed
and complex. Join this session to learn
how an effective observability practice
integrates people, processes, and tools
to foster collaboration, break down
silos, and enable proactive detection,
diagnosis, and resolution of
issues.
HPE Private Cloud AI Technical Demo
April 30, 2025
5 p.m. CET / 8 a.m. PT
This technical meetup delves into the HPE Private Cloud AI
platform with role-specific walkthroughs: administrators focus
on user and GPU management, data engineers explore data
pipeline construction with Airflow and Spark, and data
scientists learn about model deployment via HPE Machine
Learning Inference Software (MLIS) and AI Essentials Solution
Accelerators.
Using structured outputs in vLLM
Looking for greater predictability and reliability
from your large language model (LLM) outputs? See
how structured outputs work in vLLM to enforce
specific formats, regex patterns, and grammars.
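Curious how that looks in practice? Here is a minimal sketch of
regex-constrained generation with vLLM's offline API. The model name
is only a placeholder, and the GuidedDecodingParams / guided_decoding
names reflect recent vLLM releases, so check them against the version
you are running before copying this.

    # Minimal sketch: constrain vLLM output to a YYYY-MM-DD string via a regex.
    # Assumes a recent vLLM release; the model below is just a placeholder.
    from vllm import LLM, SamplingParams
    from vllm.sampling_params import GuidedDecodingParams

    # The regex forces the generated text into an ISO-date shape.
    guided = GuidedDecodingParams(regex=r"\d{4}-\d{2}-\d{2}")
    params = SamplingParams(guided_decoding=guided, max_tokens=16, temperature=0)

    llm = LLM(model="meta-llama/Llama-3.2-1B-Instruct")  # placeholder model
    out = llm.generate(["Reply with the release date of Python 3.12:"], params)
    # The format is guaranteed by the constraint; the factual content is not.
    print(out[0].outputs[0].text)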
Announcing Chapel 2.4!
This release brings powerful new features, including
multi-dimensional array literals, significantly
improved Python interoperability, and more!