
New traffic redirection


When scaling horizontally, you need to know when to scale up or down. In soketi, that decision is usually based on two resources (see the example after the list):

  • memory - low available memory makes it hard to accept new sockets, since each new connection needs memory to store its user data and the reference to the connection itself

  • CPU - low remaining CPU headroom increases the latency of your connections and messages
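
As a rough sketch of acting on these two signals in Kubernetes (the Deployment name soketi and the numbers below are placeholders, not something soketi ships), you can attach a CPU-based autoscaler from the command line; memory-based scaling needs a full HorizontalPodAutoscaler manifest with a memory metric instead:

# Hypothetical: autoscale a Deployment named "soketi" between 2 and 10 replicas on CPU.
# kubectl autoscale only supports CPU; memory-based scaling requires an HPA manifest.
kubectl autoscale deployment soketi --cpu-percent=70 --min=2 --max=10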

For instances that run in non-worker mode, you can check whether they can receive new connections according to a threshold:

# If usage is < 75% of the allowed memory, the endpoint returns 200 OK.
SOKETI_HTTP_ACCEPT_TRAFFIC_MEMORY_THRESHOLD=75 soketi start
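
If you run the official Docker image instead of the CLI, the same variable can be passed through the environment (the image tag below is an assumption; use the one you actually deploy):

# Same threshold via Docker; the tag is an assumption, adjust it to your deployment.
docker run -p 6001:6001 \
  -e SOKETI_HTTP_ACCEPT_TRAFFIC_MEMORY_THRESHOLD=75 \
  quay.io/soketi/soketi:latest-16-alpine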

The threshold is controlled by SOKETI_HTTP_ACCEPT_TRAFFIC_MEMORY_THRESHOLD, the percentage of used memory above which the /accept-traffic request returns a non-200 response. This lets you configure your infrastructure to route new traffic to instances that return 200 OK, instead of relying on a random algorithm that does not favor the instances with the most available memory first.

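# Probe the endpoint manually; 200 OK means the instance can still accept new connections.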
curl -X GET http://localhost:6001/accept-traffic
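
How you consume the endpoint depends on your load balancer; as a minimal sketch (the instance IPs and the output file are hypothetical), you could periodically keep only the hosts that answer 200 OK in the pool that receives new connections:

# Hypothetical sketch: keep only the instances that still accept traffic.
# The IPs and the output file are placeholders for your own service discovery.
for host in 10.0.1.10 10.0.1.11 10.0.1.12; do
  if curl -fs -o /dev/null "http://$host:6001/accept-traffic"; then
    echo "$host"    # below the memory threshold, keep routing new connections here
  fi
done > healthy-soketi-hosts.txt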

Alternatively, for Kubernetes there is Network Watcher, a sidecar container that automatically manages pod labels so that traffic is redirected to pods that have enough free memory. Network Watcher is not influenced by this particular endpoint.
