News Archive - Gcore | https://gcore.com/news/feed/
Official Gcore CDN and Cloud Blog

DDoS attack trends Q3–Q4 2024: three key insights from the latest Gcore Radar report
https://gcore.com/news/radar-q3-q4-2024-insights/ | Wed, 12 Feb 2025 07:00:00 +0000

DDoS (distributed denial-of-service) attacks are evolving to become even more sophisticated and dangerous. That’s why it’s vital to keep your digital resources thoroughly protected and stay aware of changes in how cybercriminals are targeting businesses.

The latest Gcore Radar report, covering the second half of 2024, offers insights into the latest attack trends, the industries that are most affected, and a general overview of the cyber threat landscape. These findings underscore the need for advanced, adaptive DDoS protection strategies to counter evolving attack patterns and shifting targets. Here are three key insights from the report, which you can download in full here.

#1. The number and size of DDoS attacks continue to increase

The total number of DDoS attacks in Q3–Q4 2024 increased by 56% compared to the same period in 2023, and the largest attack peaked at 2 Tbps, an 18% increase on the previous half year. The chart below shows how the number of attacks has risen over the last year.
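As a quick sanity check on these figures, the peak size for the previous half year can be derived from the 18% growth rate. This is a back-of-the-envelope illustration, not a number taken from the report:

```python
# The report states a 2 Tbps peak, up 18% on the previous half year.
# Working backwards gives the implied previous peak (illustrative only).
current_peak_tbps = 2.0
growth = 0.18  # 18% increase over the previous half year

previous_peak_tbps = current_peak_tbps / (1 + growth)
print(round(previous_peak_tbps, 2))  # 1.69
```

In other words, the prior record was roughly 1.7 Tbps, so attack peaks grew by about 300 Gbps in six months.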

There are many reasons for this rise, including the growing availability of tools (including AI), poorly secured IoT devices that make it easier for attackers to build large botnets, and geopolitical tensions worldwide. These factors have made DDoS attacks ever more frequent, complex, and damaging.

#2. The finance and gaming industries are favored targets for hackers

While businesses across all industries need to be on their guard, the finance and gaming sectors are the most targeted. The financial services industry saw the most significant rise in attacks, with a 117% increase, while gaming remained the most-targeted industry despite a 31% decrease in its share of total attacks compared to Q1–Q2 2024. This relative decline in gaming-related attacks could be attributed to attackers pivoting to higher-value sectors such as finance.

Since the financial services industry is highly lucrative with a low tolerance for disruption and downtime, it’s seen as a perfect target for extortion opportunities. In Q3–Q4 2024, financial services experienced a significant increase in attacks, accounting for 26% of all DDoS attacks, up from 12% in the previous period.
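These two figures are consistent with the 117% rise cited above: going from a 12% share to a 26% share of all attacks is roughly a 117% increase, as a quick calculation shows:

```python
# Financial services' share of all DDoS attacks, per the report.
previous_share = 12  # % of attacks in Q1–Q2 2024
current_share = 26   # % of attacks in Q3–Q4 2024

# Relative increase between the two periods.
increase_pct = (current_share - previous_share) / previous_share * 100
print(round(increase_pct))  # 117
```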

With an extensive presence across six continents, Gcore can provide precise geographical insights into the origins of all DDoS attack types. Hackers seem to favor locations with dense infrastructure and high connectivity, making proactive defenses in these areas increasingly critical. Countries emerging as key originators for attacks include the Netherlands, the US, Brazil, China, and Indonesia. The chart below shows the leading countries where network-layer attacks originate. To discover where application-layer attacks originate, check out the full report.

How Gcore can help protect your business from DDoS attacks

To stay one step ahead of hackers, it’s vital that every aspect of your business is protected from cyber threats. As DDoS attack methods become more sophisticated and targeted, organizations must adopt a proactive stance in their defense strategies.

No matter where your business operates, Gcore DDoS Protection can keep your business safe and secure. With a vast network capacity and global reach, our solution helps your business continuity and security in a challenging threat landscape. For a deeper dive into the data and trends, download the full Gcore Radar report.

Download the Q3–Q4 2024 Gcore Radar report

Introducing low-latency live streams with LL-HLS and LL-DASH
https://gcore.com/news/streaming-updates-january-2025/ | Tue, 11 Feb 2025 07:00:00 +0000

We are thrilled to introduce low-latency live streams for Gcore Video Streaming using LL-HLS and LL-DASH protocols. With a groundbreaking glass-to-glass delay of just 2.2–3.0 seconds, this improvement brings unparalleled speed to your viewers’ live-streaming experience.

[Video: workflow of low-latency live streaming using LL-HLS and LL-DASH protocols]

This demonstration shows the minimal latency of our live streaming solution—just three seconds between the original broadcast (left) and what viewers see online (right).

Key use cases and benefits of low-latency streaming

Our low-latency streaming solutions address the specific needs of content providers, broadcasters, and developers, enabling seamless experiences for diverse use cases.

Ultra-fast live streaming

  • Get real-time delivery with glass-to-glass latency of around 2.2 seconds for LL-DASH and around 3.0 seconds for LL-HLS.
  • Deliver immediate viewer engagement, ideal for industries such as live sports, e-sports tournaments, and news broadcasting.
  • Meet the expectations of audiences who demand instant access to live events without noticeable delays.

Enhanced viewer interaction

  • Reduce the delay between live actions and audience reactions, fostering a more immersive viewing experience.
  • Support real-time interaction for use cases like virtual conferences, live auctions, Q&A sessions, and live shopping platforms.

Flexible player support

  • Seamlessly integrate with your existing player setups, including popular options like hls.js, dash.js, and native Safari support.
  • Use our new HTML web player for effortless integration or maintain your current custom player workflows.

Global scalability and reliability

  • Leverage our robust CDN network with 200+ Tbps capacity and 180+ PoPs to enable low-latency streams worldwide.
  • Deliver a consistent, high-quality experience for global audiences, even during peak traffic events.

Cost-efficiency

  • Minimize operational overhead with a streamlined solution that combines advanced encoding, efficient packaging, and reliable delivery.

How it works

Our real-time transcoder and JIT packager generate streaming manifests and chunks optimized for low latency:

  • For LL-HLS: The HLS manifest (.m3u8) and chunks comply with the latest standards. Tags like #EXT-X-PART, #EXT-X-PRELOAD-HINT, and others are dynamically generated with best-in-class parameters. Chunks are loaded instantaneously as they appear at the origin.
  • For LL-DASH: The DASH manifest (.mpd) leverages advanced MPEG-DASH features. Chunks are transmitted to viewers as soon as encoding begins, with caching finalized once the chunk is fully fetched.
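To make the LL-HLS side concrete, here is a minimal sketch of a media playlist fragment using the tags mentioned above, with a small parser that extracts the partial-segment durations. The playlist content and file names are illustrative, not an actual Gcore manifest:

```python
import re

# Illustrative LL-HLS media playlist fragment: #EXT-X-PART advertises
# sub-second partial segments, and #EXT-X-PRELOAD-HINT tells the player
# which part to request before it has been fully produced at the origin.
playlist = """#EXTM3U
#EXT-X-VERSION:9
#EXT-X-TARGETDURATION:4
#EXT-X-PART-INF:PART-TARGET=0.5
#EXT-X-PART:DURATION=0.5,URI="seg1.part0.m4s"
#EXT-X-PART:DURATION=0.5,URI="seg1.part1.m4s"
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="seg1.part2.m4s"
"""

# Collect the duration of each advertised partial segment.
part_durations = [
    float(m.group(1))
    for m in re.finditer(r'#EXT-X-PART:DURATION=([\d.]+)', playlist)
]
print(part_durations)  # [0.5, 0.5]
```

Because players can fetch these half-second parts (and preload the hinted one) instead of waiting for a full multi-second segment, the playback buffer, and hence glass-to-glass latency, stays small.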

Combined with our fast and reliable CDN delivery, live streams are accessible globally with minimal delay. Our CDN network has an extensive capacity and 180+ PoPs to deliver exceptional performance, even for high-traffic events.

See a live demo in action!

Try WebRTC to HLS/DASH today

We’re also excited to remind you about our WebRTC to HLS/DASH delivery functionality. This innovative feature allows streams created in a standard browser via WebRTC to be:

  1. Transcoded on our servers.
  2. Delivered with low latency to viewers using HTTP-based LL-HLS and LL-DASH protocols through our CDN.

Try it now in the Gcore Customer Portal.

Shaping the future of streaming

By nearly halving the glass-to-glass delivery time compared to our previous solution, Gcore Video Streaming enables you to deliver a seamless experience for live events, real-time interactions, and other latency-sensitive applications. Whether you’re broadcasting to a global audience or engaging niche communities, our platform provides the tools you need to thrive in today’s dynamic streaming landscape.

Watch our demo to see the difference and explore how this solution fits into your workflows.

Visit our demo player

Announcing a major AI enhancement: Gcore Everywhere Inference
https://gcore.com/news/everywhere-inference/ | Thu, 16 Jan 2025 07:00:00 +0000

We are excited to share a game-changing enhancement to our next-generation AI inference solution, Everywhere Inference, formerly known as Inference at the Edge. This update directly responds to challenges enterprises face today, providing the tools needed to overcome obstacles like rising inference demands, operational complexity, and compliance requirements.

With Everywhere Inference, you can now deploy AI inference seamlessly across any environment you choose—whether on-premises, in Gcore’s cloud, public clouds, or in a hybrid configuration. In response to the changing needs of our customers as AI evolves, Everywhere Inference enables flexible, efficient, and optimized inference management, no matter your use case. This exciting news highlights the expanding horizons for AI at Gcore, but what will never change is the steadfast commitment to low latency, scalability, and compliance that you’ve come to expect.

How Gcore Everywhere Inference is transforming AI workloads

Everywhere Inference is designed to give businesses more flexibility and control over their AI workloads. Here’s a breakdown of the latest enhancements.

Smart routing for faster, seamless performance

Workloads are now automatically directed to the nearest available compute resource, delivering low-latency performance even for the most time-sensitive applications. This means that business-critical applications that require accuracy and speed, like real-time fraud detection systems, can now deliver faster responses while maintaining accuracy when it’s needed most.
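Conceptually, smart routing behaves like the sketch below: pick the healthy compute region with the lowest measured latency to the caller. This is a simplified illustration with made-up region names and latencies; the actual routing logic is internal to Gcore:

```python
# Simplified model of latency-based smart routing: choose the healthy
# region with the lowest round-trip latency to the requesting client.
# Region names and latency figures are invented for illustration.
regions = {
    "frankfurt": {"latency_ms": 18, "healthy": True},
    "amsterdam": {"latency_ms": 12, "healthy": False},  # failing health checks
    "paris": {"latency_ms": 25, "healthy": True},
}

def route(regions):
    # Only healthy regions are candidates; among them, take the fastest.
    candidates = {name: r for name, r in regions.items() if r["healthy"]}
    return min(candidates, key=lambda name: candidates[name]["latency_ms"])

print(route(regions))  # frankfurt
```

Note that the nominally fastest region (amsterdam) is skipped because it is unhealthy, which is why health checks and latency measurement have to work together.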

Multi-tenancy for resource efficiency

With the new multi-tenancy capability, businesses can run multiple AI workloads simultaneously on shared infrastructure. This maximizes resource utilization and reduces operational costs, especially for industries like telecommunications that rely on dynamic network optimization.

Flexible deployment across environments

Deployment flexibility empowers businesses to adapt quickly to changing demands and integrate seamlessly with existing infrastructure. Whether on-premises, in the Gcore cloud, in public clouds, or in a hybrid configuration, Everywhere Inference makes it easy to deploy inference workloads wherever they’re needed.

Ultra-low latency powered by our global network

Leveraging Gcore’s global network with over 180 points of presence (PoPs), businesses can achieve ultra-low latency by processing workloads closer to end users. Our extensive infrastructure enables real-time processing, instant deployment, and seamless performance across the globe.

Dynamic scaling for demand surges

Scaling resources on demand is now faster and more precise, enabling businesses to handle usage spikes without over-provisioning. For businesses that experience peak traffic periods, like retail, dynamic scaling allows you to handle surges while keeping infrastructure costs in check.

Compliance-ready processing

Built with regulatory compliance in mind, Everywhere Inference meets data sovereignty requirements, including GDPR. This makes it an ideal choice for sectors that need to store and protect sensitive data, like healthcare.

The future of AI inference is here

With these enhancements, Gcore Everywhere Inference sets a new standard for AI inference solutions. Whether you’re optimizing real-time applications, scaling rapidly, or navigating complex regulatory environments, Everywhere Inference will drive the speed, efficiency, and flexibility you need in the age of AI.

Discover Everywhere Inference

Gcore 2024 round-up: 10 highlights from our 10th year
https://gcore.com/news/best-of-2024/ | Mon, 30 Dec 2024 07:00:00 +0000

It’s been a busy and exciting year here at Gcore, not least because we celebrated our 10th anniversary back in February. Starting in 2014 with a focus on gaming, Gcore is now a global edge AI, cloud, network, and security solutions provider, supporting businesses from a wide range of industries worldwide.

As we start to look forward to the new year, we took some time to reflect on ten of our highlights from 2024.

1. WAAP launch

In September, we launched our WAAP security solution (web application and API protection) following the acquisition of StackPath’s edge WAAP. Gcore WAAP is a genuinely innovative product that offers customers DDoS protection, bot management, and a web application firewall, helping protect businesses from the ever-increasing threat of cyber attacks. It brings next-gen AI features to customers while remaining intuitive to use, meaning businesses of all sizes can futureproof their web app and API protection against even the most sophisticated threats.

My highlight of the year was the StackPath WAAP acquisition, which enabled us to successfully deliver an enterprise-grade web security solution at the edge to our customers in a very short time.

Itamar Eshet, Senior Product Manager, Security

2. Fundraising round: investing in the future

In July, we raised $60m in Series A funding, reflecting investors’ confidence in the continued growth and future of Gcore. Next year will be huge for us in terms of AI development, and this funding will accelerate our growth in this area and allow us to bring even more innovative solutions to our customers.

3. Innovations in AI

In 2024, we upped our AI offerings, including improved AI services for Gcore Video Streaming: AI ASR for transcription and translation, and AI content moderation. As AI is at the forefront of our products and services, we also provided insights into how regulations are changing worldwide and how AI will likely affect all aspects of digital experiences. We already have many new AI developments in the pipeline for 2025, so watch this space…

4. Global expansions

We had some exciting expansions in terms of new cloud capabilities. We expanded our Edge Cloud offerings in new locations, including Vietnam and South Korea, and in Finland, we boosted our Edge AI capabilities with a new AI cluster and two cutting-edge GPUs. Our AI expansion was further bolstered when we introduced the H200 and GB200 in Luxembourg. We also added new PoPs worldwide in locations such as Munich, Riyadh, and Casablanca, demonstrating our dedication to providing reliable and fast content delivery globally.

5. FastEdge launch

We kicked off the year with the launch of FastEdge. This lightweight edge computing solution runs on our global Edge Network and delivers exceptional performance for serverless apps and scripts. This new solution makes handling dynamic content even faster and smoother. We ran an AI image recognition model on FastEdge in an innovative experiment. The Gcore team volunteered their pets to test FastEdge’s performance. Check out the white paper and discover our pets and our technological edge.

6. Partnerships

We formed some exciting global partnerships in 2024. In November, we launched a joint venture with Ezditek, an innovator in data center and digital infrastructure services in Saudi Arabia. The joint venture will build, train, and deploy generative AI solutions locally and globally. We also established some important strategic partnerships. Together with Sesterce, a leading European provider of AI infrastructure, we can help more businesses meet the rising challenges of scaling from AI pilot projects to full-scale implementation. We also partnered with LetzAI, a Luxembourg-based AI startup, to accelerate its mission of developing one of the world’s most comprehensive generative AI platforms.

7. Events

It wasn’t all online. We also ventured out into the real world, making new connections at global technology events, including the WAICF AI conference and Viva Tech in Cannes and Paris, respectively; Mobile World Congress in Barcelona; Gamescom in Cologne in August; IBC (the International Broadcasting Convention) in Amsterdam; and Connected World KSA in Saudi Arabia just last month. We look forward to meeting even more of you next year. Here are a few snapshots from 2024.

[Photos: Gamescom and IBC]

8. New container registry solution

September kicked off with the beta launch of Gcore Container Registry, one of the backbones of our cloud offering. It streamlines your image storage and management, keeping your applications running smoothly and consistently across various environments.

9. GigaOm recognition

Being recognized by independent analysts is always a moment to remember. In August, we were thrilled to receive recognition from tech analyst GigaOm, which noted Gcore as an outperformer in its field. The prestigious accolade highlights Gcore as a leader in platform capability, innovation, and market impact, as assessed by GigaOm’s rigorous criteria.

10. New customer success stories

We were delighted to share some of the work we’ve done for our customers this year: helping gaming company Fawkes Games mitigate DDoS attacks and providing Austrian sports broadcaster and streaming platform fan.at with the infrastructure for its sports technology offering.

And as a bonus number 11: if you’re looking for something to read in the new-year lull, we have informative long reads on topics including selecting a modern content delivery network, cyber attack trends, and using Kubernetes to enhance AI. Download the ebook of your choice below.

Here’s to 2025!

And that’s it for our 2024 highlights. It’s been a truly remarkable year, and we thank you for being a part of it. We’ll leave you with some words from our CEO and see you in 2025.

2024 has been a year of highs, from our tenth anniversary celebrations to the launch of various new products, and from expansion into new markets to connecting with customers (new and old) at events worldwide. Happy New Year to all our readers who are celebrating, and see you for an even bigger and better 2025!

Andre Reitenbach, CEO

Chat with us about your 2025 needs

Shaping the future of AI for video streaming in 2025
https://gcore.com/news/streaming-updates-december-2024/ | Mon, 16 Dec 2024 07:00:00 +0000

As we look towards 2025, we’re thrilled to announce major AI updates to enhance your Gcore Video Streaming experience. From transcription and translation to content moderation, here’s what’s new this month.

AI transcription and translation for all

Every Gcore Video Streaming customer can now access automated transcription and translation.

Free universal AI subtitle generation

Starting in December 2024, every video uploaded to Gcore Video Streaming will automatically have subtitles generated in the original audio language thanks to our advanced AI transcription capabilities. This feature supports over 99 languages and can handle:

  • Speech from a single speaker
  • Conversations with multiple speakers
  • Videos featuring multiple languages

These subtitles are applied free and by default, making your content more accessible and engaging for global audiences.

AI subtitle translation

We’ve also introduced a translation feature that allows you to convert subtitles into other languages. The translated subtitles are automatically embedded into your videos and can be accessed directly in the player. This feature helps expand your reach to international viewers seamlessly.

How AI subtitles work

Using these features is simple:

  1. Upload a video to our platform
  2. Copy the player code
  3. Embed the player on your website

For example, here’s how to add a video player to your page:
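The code sample did not survive the feed export; a typical embed looks like the following iframe snippet, where the player URL and video ID are placeholders you would replace with the player code copied from the Gcore Customer Portal:

```html
<!-- Hypothetical embed: replace the src with the player URL copied
     from the Customer Portal for your uploaded video. -->
<iframe
  src="https://player.example.com/videos/YOUR_VIDEO_ID"
  width="640" height="360"
  frameborder="0" allowfullscreen>
</iframe>
```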

By following these steps, you can effortlessly incorporate cutting-edge AI features into your video content.

Content moderation at your fingertips

Additionally, Gcore Video Streaming now offers AI content moderation. Detect sensitive content such as NSFW material, nudity, weapons, sports, and more, supporting compliance and brand safety. Learn more about how it works in our API documentation.

Enjoy these AI features today with Gcore Video Streaming

Ready to transform your audio content into valuable insights? Our AI Automatic Speech Recognition (AI ASR) delivers fast, accurate transcriptions tailored to your business needs. Explore how our ASR can enhance your workflows—start your journey today.

If you’re interested in how AI is shaping the future of video, take a look at our blog on key trends for AI in video for 2025.

Discover Gcore AI video streaming features

The latest Edge AI updates: empowering your AI innovation in 2025
https://gcore.com/news/ai-updates-december-2024/ | Tue, 10 Dec 2024 07:00:00 +0000

As 2024 draws to a close, we’re excited to share significant updates to Gcore Edge AI, which includes Inference at the Edge and GPU Cloud. These updates are designed to enhance operational efficiency, improve compliance with data sovereignty requirements, and provide cost-effective solutions as you scale AI workloads in 2025. Let’s dive into the latest enhancements.

Partnership with Sesterce

We are thrilled to announce our partnership with Sesterce, which combines Gcore’s Inference at the Edge infrastructure with Sesterce’s industry-specific AI models, training frameworks, and deployment solutions. This collaboration provides businesses with an end-to-end AI platform, simplifying the development, training, and deployment of AI models across cloud, on-premises, and edge environments. By leveraging Gcore’s global edge network for low-latency processing and Sesterce’s expertise in AI applications, customers can accelerate AI adoption, reduce operational complexity, and achieve real-time insights. End users benefit from faster, smarter, and more reliable AI-driven services, tailored to their specific needs. Learn more in our dedicated blog.

Native logs for inference deployment tracking

Operational visibility is a cornerstone of effective AI deployment, and our new Logs for Inference Deployments feature delivers precisely that. With this tool, you can track and analyze model logs directly from the Gcore Customer Portal, enabling you to optimize performance, troubleshoot issues, and gain actionable insights. This capability provides an intuitive, centralized way to monitor your AI operations and is available now for all Inference at the Edge customers.

New models available in the Gcore model library

We’ve expanded the Gcore model library, which now includes even more cutting-edge models across domains. These additions mean your AI projects can harness the most advanced AI tools available at the click of a button for creativity, automation, and analytical insights.

Image generation models:

  • Stable Diffusion XL Base 1.0 delivers remarkable image quality and precision.
  • SDXL Lightning is optimized for rapid image generation.
  • Stable Cascade is a versatile model for diverse creative applications.
  • FLUX.1-schnell and FLUX.1-dev offer high-performance options for demanding image tasks.
  • Stable Diffusion 3.5 Large Turbo combines speed and quality.
  • Stable Diffusion 3.5 Large focuses on intricate detail and vibrant outputs.

Reasoning AI models:

  • Mistral-Nemo-Instruct-2407 is designed for handling complex instructional tasks and nuanced responses.
  • Pixtral-12B-2409 acts as a multimodal powerhouse for visual processing.
  • Llama-3.2-1B-Instruct is a lightweight yet efficient model tailored for instructional use cases.
  • Qwen2.5-7B-Instruct and Qwen2-VL-7B-Instruct deliver next-generation language and vision-language capabilities with exceptional performance.
  • QwQ-32B-Preview offers cutting-edge advancements in large-scale AI modeling, enabling more complex and accurate applications.

Looking ahead to 2025

These updates represent our commitment to AI innovation and to delivering solutions that empower businesses to thrive in the AI era. From expanding our model library to new partnerships, we’re shaping the future of AI for enterprises worldwide.

Gcore Edge AI has major news coming in early 2025. Be the first to hear about it by subscribing to our newsletter.

Get AI news first with the Gcore newsletter

Edge Cloud updates for December 2024
https://gcore.com/news/cloud-updates-december-2024/ | Tue, 03 Dec 2024 07:00:00 +0000

We are pleased to introduce the latest enhancements to our Edge Cloud platform, delivering greater flexibility, reliability, and control over your infrastructure. These updates include multiple public IP support for Bare Metal and strengthened anti-abuse measures. Exclusively for new accounts, we’re offering a special promotion for Bare Metal server activations. Find all the details in this blog.

Multiple public IP support for Bare Metal

We’re introducing multiple public IP support for Bare Metal servers on dedicated public subnetworks, adding flexibility and reliability. With this update, you can configure several public IP addresses for seamless service continuity, making your infrastructure more robust. With multiple IPs configured, your services remain online without interruption even if one IP address fails.

This functionality brings significant flexibility to scale your operations effortlessly. It’s particularly useful for handling diverse workloads, traffic routing, and complex hosting environments. It’s also an ideal solution for hypervisor environments where segregating traffic across various IPs is crucial.

Here’s what you need to know before getting started:

  • This feature works exclusively with a dedicated public subnet.
  • To enable this functionality, please place a request with our support team.
  • The number of supported public IPs is limited by the size of the dedicated subnet assigned to your Bare Metal server.
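Because the IP count is bounded by the dedicated subnet size, you can estimate capacity up front. The sketch below uses Python’s standard ipaddress module with an example subnet from the documentation range; the exact number of reserved addresses (assumed here to be three: network, broadcast, and gateway) depends on your network setup:

```python
import ipaddress

# Example dedicated public subnet (RFC 5737 documentation range).
subnet = ipaddress.ip_network("203.0.113.0/29")

# A /29 contains 8 addresses; typically the network address, broadcast
# address, and gateway are reserved, leaving the rest assignable.
total = subnet.num_addresses
reserved = 3  # assumption: network + broadcast + gateway
assignable = total - reserved
print(total, assignable)  # 8 5
```

So a /29 dedicated subnet would support around five public IPs on the server, while a larger subnet (say, a /28) would roughly double that.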

Please contact our support team to start using multiple public IPs.

Strengthened anti-abuse measures

We’ve introduced new anti-abuse measures to detect and mitigate abusive traffic patterns, enhancing service reliability and protecting your infrastructure from malicious activity. These updates help safeguard your network and achieve consistent application performance.

Get more information in our Product Documentation.

Try Bare Metal with 35% off this month

Gcore Bare Metal servers are the perfect choice for delivering unmatched performance, designed to handle your most demanding workloads. With global availability, they provide a reliable, high-performance, and scalable solution wherever you need them. For a limited time, new customers can enjoy 35% off High-frequency Bare Metal servers for two months*.

If you’ve been disappointed by your provider during peak season or you’re looking to scale going into 2025, this is the opportunity for you. Take advantage of the offer by January 7 to secure your discount, available for the first 500 customers.

Unlock the full potential of Edge Cloud

These updates reflect our ongoing commitment to supporting your business with tools and features that address your computing needs. Whether enhancing flexibility, simplifying server management, or improving cost oversight, our Edge Cloud platform is built to help you achieve your goals with confidence.

We invite you to explore these enhancements today and take full advantage of the capabilities now available.

Discover Gcore Bare Metal

* Note: This promotion is available until January 7, 2025. The discount applies for two months from the subscription date and is valid exclusively for new customers activating high-frequency Bare Metal servers. After two months, the discount will be automatically removed. The offer is limited to the first 500 activations.

Gcore and Sesterce forge strategic partnership to deliver next-gen AI solutions
https://gcore.com/news/gcore-sesterce-partnership/ | Thu, 28 Nov 2024 07:00:00 +0000

We are excited to announce our partnership with Sesterce, a leading European provider of AI infrastructure. Together, we are addressing the growing demands of businesses as they transition from piloting AI projects to full-scale deployments.

By combining Sesterce’s high-performance AI infrastructure with Gcore’s edge computing and inference expertise, this collaboration provides enterprises in France and across Europe with seamless solutions to develop, train, and deploy AI applications. With a focus on reducing costs, simplifying complexity, and ensuring compliance, this partnership empowers organizations to scale their AI initiatives efficiently and effectively across on-premises, cloud, and edge environments.

At the forefront of AI infrastructure

Sesterce is among the first in France to offer AI infrastructure powered by NVIDIA H200 GPUs, solidifying its position as a leading provider of AI compute solutions. This capability enables businesses in the region to access state-of-the-art resources for AI training and inference. As more AI projects transition from training to large-scale inference deployments, choosing a partner that can meet the demands of real-time AI has become critical for success.

Real-time performance at the edge

With Gcore’s cutting-edge inference capabilities, businesses can process AI workloads closer to the data source, delivering critical advantages:

  • Instant decision-making: Real-time processing is crucial for applications like fraud detection, predictive maintenance, and customer service.
  • Cost efficiency: By reducing dependency on centralized cloud infrastructure, businesses can significantly lower operational costs.
  • Compliance and operational efficiency: Smart routing technology keeps customer data within the defined region to meet compliance requirements, while model and GPU health checks maintain continuous service for operational efficiency.

Transforming key industries

Together, Gcore and Sesterce unlock groundbreaking potential for businesses in France across a variety of sectors:

  • Healthcare: Revolutionizing diagnostics with real-time AI-powered imaging and patient monitoring systems.
  • Financial Services: Enhancing fraud detection and risk analysis through low-latency, AI-driven insights.
  • Telcos: Improving predictive maintenance and customer experience management.
  • Gaming: Delivering immersive, real-time player experiences with AI-enabled content and analytics.
  • Retail: Optimizing operations and creating personalized shopping experiences at scale.

Unlock the future of AI with Gcore and Sesterce

This partnership reflects a shared commitment to shaping the future of AI adoption in France and beyond. By combining Sesterce’s next-gen AI infrastructure with Gcore’s unparalleled edge AI capabilities, we’re enabling businesses to achieve transformative AI outcomes at scale and with speed.

Ready to harness the power of AI? Explore how Gcore and Sesterce can help you accelerate your journey today.

Discover Gcore Inference at the Edge

Introducing FastEdge updates for November
https://gcore.com/news/november-updates-fastedge/ | Tue, 26 Nov 2024 07:00:00 +0000

This month, we’re bringing improvements to FastEdge, our serverless edge computing solution, to simplify workflows, enhance security, and streamline application management. FastEdge customers can now access secret storage support, a CLI, and configuration templates. We’ve also improved Gcore Customer Portal controls. Here’s what’s new.

Secret storage support

Managing sensitive information is a critical challenge for modern applications. With secret storage support, you now have access to a robust system for encrypting and managing API keys, credentials, tokens, and other sensitive data within your application’s environment variables.

Here’s what this means for you:

  • Enhanced security: Protect sensitive information using advanced encryption techniques, reducing the risk of intentional or accidental data leaks.
  • Streamlined management: Simplify your application configuration with a single, secure workflow for environment variables.
  • Regulatory compliance: Meet stringent data protection and privacy standards, safeguarding your applications against breaches and regulatory penalties.

CLI (Command line interface)

The FastEdge CLI is a powerful tool that enables developers and DevOps teams to interact directly with FastEdge through a command-line interface. This feature streamlines workflows by offering a set of commands to manage deployments, monitor performance, and integrate with your existing CI/CD pipelines. Additionally, it supports local testing, allowing teams to replicate deployment environments and test changes in real time before they go live.

Here’s what this means for you:

  • Efficient automation: Reduce human error by automating updates, scaling, or managing configurations.
  • Seamless integration: Streamline CI/CD pipelines with FastEdge to enable faster development cycles, quicker go-to-market, and reduced overhead.
  • Greater control: Use the CLI to manage settings and deployments, giving developers the flexibility to tailor their processes to their specific needs.
  • Enhanced flexibility: Test and debug your applications locally to validate changes before deployment, reducing risks and ensuring better outcomes.
  • Streamlined development: Simplify routine tasks, enabling your teams to focus on innovation and improving application performance.

If you’d like to learn to use the FastEdge CLI and explore the full documentation, check out the FastEdge CLI GitHub repository.

Templates for rapid configuration

Predefined templates significantly streamline FastEdge service deployment by simplifying configuration for common use cases.

Here’s what this means for you:

  • Rapid setup: Prebuilt options for caching, security, and load balancing enable fast, error-free configuration that saves considerable time during deployment.
  • Consistency: Standardized configurations across all deployments reduce errors and enhance stability throughout your system.
  • Flexibility: Templates can be readily customized to accommodate your specific requirements.
  • Easier scaling and maintenance: Apply consistent adjustments across your entire network infrastructure.
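Conceptually, a template is a bundle of default settings that each deployment can override selectively. The sketch below models that merge in Python; the template fields and the `apply_template` helper are hypothetical illustrations, not FastEdge's actual template schema.

```python
# A hypothetical caching template: sensible defaults for one common use
# case. Field names are illustrative, not FastEdge's real schema.
CACHING_TEMPLATE = {
    "name": "static-asset-cache",
    "env": {"CACHE_TTL_SECONDS": "3600"},
    "response_headers": {"Cache-Control": "public, max-age=3600"},
}

def apply_template(template: dict, overrides: dict) -> dict:
    """Merge per-deployment overrides onto a template's defaults.

    Top-level keys in `overrides` replace the template's values, while
    the "env" mapping is merged key by key so unrelated defaults survive.
    """
    merged = {**template, **{k: v for k, v in overrides.items() if k != "env"}}
    merged["env"] = {**template.get("env", {}), **overrides.get("env", {})}
    return merged
```

Starting every deployment from the same template and overriding only what differs is what gives you the consistency described above.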

Enhanced management in the Gcore Customer Portal

The updated Gcore Customer Portal introduces enhanced tools for managing FastEdge services, offering partners and resellers a more efficient way to control customer settings and troubleshoot issues.

The centralized management interface allows you to swiftly modify customer settings, saving time and simplifying configuration processes across your customer base. When onboarding new customers, you can share custom templates with them for a consistent, streamlined process that accelerates collaboration. The system’s real-time diagnostics and insights enable faster troubleshooting of application issues, helping to minimize service disruptions. These improvements also make it simple to manage and scale services across multiple customers while maintaining consistent configurations throughout your entire customer base.

Stay tuned for further FastEdge updates

These updates to FastEdge, our serverless edge computing solution, make it easier than ever to deliver secure, scalable, and high-performing applications. Stay tuned for more enhancements next month!

Introducing essential countermeasures to protect GTA V FiveM servers from DDoS attacks https://gcore.com/news/fivem-countermeasures/ Tue, 12 Nov 2024 07:00:00 +0000 https://gcore.com/?post_type=news&p=32605 Smooth, uninterrupted gameplay is the ultimate goal for every gaming company and game server owner, but relentless DDoS attacks threaten to disrupt the experience for countless players. Gaming servers are particularly vulnerable because they require low-latency, real-time data exchanges, so even a short disruption can have a severe impact. The gaming industry was the most-targeted sector for DDoS attacks in the first half of 2024, with downtime costing companies between $25,000 and $40,000 per hour.

Why does FiveM need DDoS countermeasures?

Countermeasures are a specialized component of a DDoS protection solution that accounts for the unique characteristics and vulnerabilities of specific gaming servers. Unlike standard DDoS protection, which may focus on broad filtering techniques, countermeasures are designed to detect, filter, and block unauthorized traffic in real time, guaranteeing that legitimate players can connect without delay or disruption. These targeted defenses are essential in gaming, where they actively prevent malicious traffic from impacting gameplay quality by addressing the specific weaknesses of each server environment.

FiveM is a multiplayer modification for Rockstar Games’ Grand Theft Auto V (GTA V). It allows users to create custom multiplayer servers with modified game scripts. These modifications enable private role-playing experiences and unique game modes, which have become increasingly popular among players seeking diverse, community-driven content. This customization attracts large, active player bases, making these servers appealing targets for DDoS attacks.

FiveM requires DDoS countermeasures in addition to standard protection because DDoS attacks targeting its game servers can be particularly sophisticated and persistent. Countermeasures provide an active, adaptive approach that uses specialized algorithms and logic to stop threats specific to these gaming servers. They continuously monitor network activity, block evolving threats, and minimize the disruption to gameplay, maintaining a stable and enjoyable experience for players.

How Gcore’s countermeasures shield FiveM servers

Gcore DDoS prevention countermeasures for FiveM offer specialized and reliable protection built around the ENet protocol, a network communication protocol most commonly used for high-performance gaming applications. By guaranteeing that data packets are delivered reliably and in the correct order, ENet improves connection stability, which is crucial for preserving the quality of gameplay.
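To see why ordered delivery matters, consider what a receiver must do when packets arrive out of order: buffer them and release each one only once every earlier packet has arrived. The toy Python sketch below illustrates that principle; real ENet additionally handles retransmission, channels, and acknowledgements, none of which are modeled here.

```python
def release_in_order(received: list[tuple[int, str]], next_seq: int = 0) -> list[str]:
    """Toy sketch of in-order delivery over an out-of-order transport.

    `received` is a list of (sequence_number, payload) pairs in arrival
    order. Out-of-order packets are buffered and released only when all
    earlier sequence numbers have been delivered.
    """
    buffered: dict[int, str] = {}
    delivered: list[str] = []
    for seq, payload in received:
        buffered[seq] = payload
        # Flush every consecutive packet that is now ready.
        while next_seq in buffered:
            delivered.append(buffered.pop(next_seq))
            next_seq += 1
    return delivered
```

For example, if packets 1, 0, 2 arrive in that order, packet 1 waits in the buffer until packet 0 lands, and the application still observes 0, 1, 2.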

Here’s how Gcore countermeasures safeguard FiveM servers using ENet and other methods:

  • Game server replacement: Temporarily substitutes the server during the ENet handshake, reinforcing connection security.
  • Passive packet inspection: Verifies incoming packets for ENet protocol compliance.
  • Authorized connections: Whitelists IPs that complete the ENet handshake, guaranteeing only legitimate users access the server.
  • Traffic filtering: Identifies legitimate traffic, guaranteeing genuine users are unaffected by harmful requests.
  • Rate limiting: Caps user requests within specific timeframes, reducing system overload risks from malicious users.

Our solution secures FiveM servers with these countermeasures, enabling players to enjoy uninterrupted and safe gaming experiences.

Building a DDoS-resilient gaming environment

As DDoS attacks become more sophisticated, implementing robust protection measures has become essential for maintaining smooth, uninterrupted gameplay on servers like FiveM. Countermeasures such as traffic filtering, rate limiting, and behavioral analysis can significantly reduce the impact of these attacks, protecting the player experience and guaranteeing server resilience.

Gcore’s advanced, cloud-based, multi-layered security solutions are designed to defend gaming communities against DDoS threats. Read our latest Gcore Radar Report to learn more about the latest DDoS trends and countermeasures.

Discover Gcore DDoS Protection
