
CODECO is proud to announce the official launch of its Validation and Experimentation Challenges, a key initiative under Work Package 7 (WP7) and aligned with the project’s Innovation and Research Community Engagement Plan (IRCEP).
These challenges are designed to stimulate innovation by enabling external participants to propose and carry out validation and experimentation activities within the CODECO framework. The initiative provides a structured opportunity for participants to engage with cutting-edge technologies and concepts defined by CODECO partners.
Active Challenges
CODECO Data Generator #1
UPRC invites participants to take part in the CODECO Validation and Experimentation Challenge, focused on enhancing synthetic data generation for CRD (Custom Resource Definition) models. Participants will work with version 1 of the Synthetic Data Generator (SDG v1) in combination with the main version of the Data Generator. The objective is to develop an improved version that integrates the synthetic data generation capabilities of SDG v1 with the real CODECO CRDs provided by the main Data Generator, producing synthetic data that accurately mirrors the structure and content of real CODECO CRDs. Participants are expected to submit a comprehensive set of results, including validation metrics and supporting evidence, demonstrating the fidelity and effectiveness of the enhanced synthetic data generation process.
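Fidelity claims of this kind are typically backed by distributional comparisons between real and synthetic values. As a hedged illustration (not part of the SDG toolchain; the field and sample values below are hypothetical), a two-sample Kolmogorov-Smirnov distance over one numeric CRD field could serve as one such validation metric:

```python
import bisect
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov distance: the largest gap between
    the empirical CDFs of the two samples (0 means identical)."""
    a, b = sorted(a), sorted(b)
    def ecdf(sample, x):
        # Fraction of sample values <= x.
        return bisect.bisect_right(sample, x) / len(sample)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

random.seed(7)
# Hypothetical numeric CRD field (e.g. a CPU request in milli-cores):
real = [random.gauss(500, 100) for _ in range(1000)]       # "real" CRD values
synthetic = [random.gauss(505, 105) for _ in range(1000)]  # generator output under validation

d = ks_statistic(real, synthetic)
print(f"KS distance: {d:.3f}")  # closer to 0 means closer distributions
```

A per-field report of such distances (alongside categorical-field frequency comparisons) is one way to present the "validation metrics and supporting evidence" the challenge asks for.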
CODECO Data Generator #2
In this challenge, participants are invited to explore, install, and rigorously evaluate the CODECO Data Generator alongside other comparable workload data generation tools. The objective is to perform a detailed, hands-on comparison that highlights the strengths, weaknesses, and limitations of each tool. Through this process, participants will contribute valuable insights into the performance and capabilities of CODECO within real-world or simulated environments.
CODECO Secure Connectivity
As part of the CODECO project’s commitment to advancing secure and efficient connectivity in next-generation communication systems, Universidad Carlos III de Madrid (UC3M) is leading a focused Validation and Experimentation Challenge. This effort aims to test and refine key components of the CODECO architecture through practical, scenario-based evaluation.
The challenge is structured around two main focal actions: the performance evaluation of the Secure Connectivity Component and the implementation of a Passive Monitoring Feature. These activities are designed to provide actionable insights into the robustness, effectiveness, and integration readiness of CODECO technologies.
Upcoming Challenges
CODECO Energy Awareness Strategies
This challenge focuses on evaluating the effectiveness of proposed energy-aware scheduling strategies within the CODECO framework. Participants are tasked with comparing their solutions against two key baselines: i) the standard Kubernetes (K8s) scheduler and ii) KEIDS or similar energy-aware scheduling approaches. The challenge will specifically assess the performance of these strategies across several SMART goals.
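A side-by-side comparison of this kind presumes each workload can be pinned to the scheduler under test. In stock Kubernetes this is done with the `schedulerName` field of the pod spec; a minimal sketch (the scheduler name below is illustrative, not a CODECO component):

```yaml
# Benchmark pod pinned to a hypothetical energy-aware scheduler.
# Omitting schedulerName (or using "default-scheduler") gives the K8s baseline run.
apiVersion: v1
kind: Pod
metadata:
  name: benchmark-workload
spec:
  schedulerName: energy-aware-scheduler   # illustrative name for the strategy under test
  containers:
    - name: workload
      image: busybox
      command: ["sleep", "3600"]
```

Deploying the same workload once per scheduler name keeps everything else constant, so observed differences can be attributed to the scheduling strategy.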
Applications open in July.
CODECO Resilience Strategies Evaluation
This challenge aims to evaluate the resilience strategies integrated into the CODECO framework. Participants will assess the effectiveness of their proposed solutions by comparing them against two key baselines: i) CODECO without resilience and ii) the vanilla Kubernetes (K8s) scheduler. The challenge focuses on measuring resilience across several SMART goals.
Applications open in July.
Evaluate the CODECO SWM Scheduler
The CODECO Validation and Experimentation Challenge is designed to assess the performance of CODECO's graph-based scheduling approach, specifically Seamless Workload Migration (SWM), by comparing it against two key baselines: the vanilla Kubernetes (K8s) scheduler and the Kubernetes network-aware scheduler.
Applications open in July.
CODECO Deployment and Scalability
This CODECO Validation and Experimentation Challenge, led by ICOM, is designed to test the deployment times and scalability of CODECO in real-world scenarios. The goal is to evaluate CODECO’s efficiency and performance across various use cases, focusing on deployment speed and its ability to scale effectively. This challenge offers participants an opportunity to validate CODECO’s capabilities while contributing valuable insights to optimize its deployment framework.
Applications open in July.
Scalability testing of fine-tuning for MARL agents
The CODECO Validation and Experimentation Challenge on scalability testing of fine-tuning for MARL agents is an initiative designed to evaluate and enhance the performance of Multi-Agent Reinforcement Learning (MARL) systems across complex neighbourhood environments. In this challenge, participants are tasked with developing and submitting solutions that trigger re-training or fine-tuning based on a custom performance metric.
Applications open in August.
Scalability of MARL model over increasing number of clusters
This CODECO Validation and Experimentation Challenge invites participants to evaluate the scalability and performance of Multi-Agent Reinforcement Learning (MARL) systems in increasingly complex neighbourhood environments. Specifically, the focus is on testing MARL performance as the number of clusters (and, consequently, the number of communicating and collaborating agents) grows. The objective is to identify the scalability limits of the MARL model, assessing at what point increased coordination begins to create bottlenecks within the CODECO system.
Applications open in August.
[CODEF and Benchmarking] Dynamic Stress Testing
In this challenge, coordinated by ATH and UPRC, we aim to rigorously evaluate the resilience and adaptability of CODECO-managed deployments under adverse conditions. Participants will introduce artificial failures and resource bottlenecks using stress testing and chaos engineering tools, such as killing pods, saturating CPUs, or throttling network bandwidth. The objective is to assess how effectively CODECO responds to these disruptions, reconfigures resources, and maintains optimal service levels under pressure.
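The fault types named above can be reproduced with standard tooling. A hedged ops sketch (the namespace, labels, deployment name, and interface are placeholders, and `stress-ng` must be available in the target image):

```shell
# Kill a random pod of the target deployment (crash resilience).
kubectl delete pod -n demo \
  "$(kubectl get pods -n demo -l app=target -o name | shuf -n1 | cut -d/ -f2)"

# Saturate CPU inside a running pod for 60 seconds.
kubectl exec -n demo deploy/target -- stress-ng --cpu 4 --timeout 60s

# Throttle egress bandwidth on a node interface to 1 Mbit/s (run on the node)...
tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms
# ...and remove the throttle afterwards.
tc qdisc del dev eth0 root
```

Recording CODECO's reaction time and achieved service levels after each injected fault gives a comparable measure across participants.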
Applications open in September.
[CODEF and Benchmarking] Redeployment Evaluation
In this challenge, organized by ATH and UPRC, participants are tasked with analyzing telemetry data collected over time to anticipate how CODECO will respond to evolving system conditions. Based on observed trends and metrics, they must predict and justify the system's redeployment decisions under stress or failure scenarios. Participants will then compare their predictions with CODECO's actual behavior, gaining insights into the system's decision-making processes and adaptive capabilities.
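One simple baseline for such predictions is linear trend extrapolation over a telemetry window: fit a least-squares line to recent samples and estimate when it will cross a redeployment threshold. A self-contained sketch (the metric, threshold, and sample values are hypothetical, not CODECO telemetry):

```python
def time_to_threshold(samples, threshold):
    """Fit a least-squares line to (time, value) samples and return the
    extrapolated time at which the trend crosses `threshold`, or None
    if the trend is flat or decreasing."""
    n = len(samples)
    ts = [t for t, _ in samples]
    vs = [v for _, v in samples]
    t_mean = sum(ts) / n
    v_mean = sum(vs) / n
    slope = (sum((t - t_mean) * (v - v_mean) for t, v in samples)
             / sum((t - t_mean) ** 2 for t in ts))
    if slope <= 0:
        return None  # no upward trend, no predicted crossing
    intercept = v_mean - slope * t_mean
    return (threshold - intercept) / slope

# Hypothetical CPU-utilisation telemetry as (minute, percent) pairs:
cpu = [(0, 40), (1, 45), (2, 50), (3, 55), (4, 60)]
eta = time_to_threshold(cpu, 90)
print(f"predicted threshold crossing at t = {eta:.1f} min")
```

A prediction of this form ("redeployment expected around t = X") can then be checked against CODECO's actual redeployment decisions, as the challenge requires.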
Applications open in September.
[CODEF and Benchmarking] Multi-Cluster Evolution for CODEF
In this advanced challenge, led by ATH and UPRC, participants will explore the capabilities of CODECO as it extends the CODEF framework to orchestrate experiments across multiple Kubernetes clusters. The focus will be on enabling unified control-plane operations and seamless inter-cluster networking. Participants will be tasked with designing and executing experiments that span clusters, leveraging CODECO to maintain consistency in service deployments and optimize system performance across a distributed environment. This challenge offers a unique opportunity to test the scalability and flexibility of CODECO in complex multi-cluster scenarios.
Applications open in September.
[CODEF and Benchmarking] Testing, Debugging and AI-powered Assistant
In this challenge, spearheaded by ATH and UPRC, participants will explore how the integration of AI/ML can enhance the CODEF framework's ability to perform testing, live debugging, and self-healing in real-time. The goal is to leverage intelligent automation to improve CODEF's robustness, enabling faster detection and resolution of issues. By optimizing these capabilities, participants will work towards reducing the Mean Time to Repair (MTTR), ultimately increasing the overall reliability and efficiency of the system. This challenge invites participants to experiment with innovative AI/ML approaches that can transform how CODECO handles system failures and operational anomalies.
Applications open in September.
[CODEF and Benchmarking] Security Mechanisms for CODEF
In this challenge, led by ATH and UPRC, participants will focus on proposing and implementing comprehensive end-to-end security mechanisms for the CODEF framework. The objective is to enhance the security posture of CODEF by designing and deploying robust solutions that protect data, services, and system communications across the entire lifecycle. Participants will evaluate and refine their security strategies to ensure that CODEF is resilient against potential vulnerabilities and attacks, while maintaining the integrity and availability of system operations. This challenge presents a unique opportunity to push the boundaries of security within dynamic, distributed environments.
Applications open in September.