What is an OpenShift route?

Routes

Overview

A defined route and the endpoints identified by its service can be consumed by a router to provide named connectivity that allows external clients to reach your applications. Each route consists of a route name (limited to 63 characters), service selector, and (optionally) security configuration.
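A minimal route definition covering these three parts might look like the following (a sketch; the host, service, and certificate choices are illustrative):

```yaml
apiVersion: v1
kind: Route
metadata:
  name: myapp-route          # route name, limited to 63 characters
spec:
  host: www.example.com      # externally reachable host name
  to:
    kind: Service
    name: myapp-service      # the service backing this route
  tls:                       # optional security configuration
    termination: edge
```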

Routers

An OpenShift administrator can deploy routers in an OpenShift cluster, which enable routes created by developers to be used by external clients. The routing layer in OpenShift is pluggable, and two available router plug-ins are provided and supported by default.

OpenShift routers provide external host name mapping and load balancing to services over protocols that pass distinguishing information directly to the router; the host name must be present in the protocol in order for the router to determine where to send it.

Router plug-ins assume they can bind to host ports 80 and 443. This is to allow external traffic to route to the host and subsequently through the router. Routers also assume that networking is configured such that it can access all pods in the cluster.

Routers support the following protocols:

HTTP

HTTPS (with SNI)

WebSockets

TLS with SNI

WebSocket traffic uses the same route conventions and supports the same TLS termination types as other traffic.

A router uses the service selector to find the service and the endpoints backing the service. Service-provided load balancing is bypassed and replaced with the router’s own load balancing. Routers watch the cluster API and automatically update their own configuration according to any relevant changes in the API objects. Routers may be containerized or virtual. Custom routers can be deployed to communicate modifications of API objects to an external routing solution.

In order to reach a router in the first place, requests for host names must resolve via DNS to a router or set of routers. The suggested method is to define a cloud domain with a wildcard DNS entry pointing to a virtual IP backed by multiple router instances on designated nodes. Router VIP configuration is described in the Administration Guide. DNS for addresses outside the cloud domain would need to be configured individually. Other approaches may be feasible.

Template Routers

A template router is a type of router that provides certain infrastructure information to the underlying router implementation, such as:

A wrapper that watches endpoints and routes.

Endpoint and route data, which is saved into a consumable form.

Passing the internal state to a configurable template and executing the template.

Calling a reload script.

Available Router Plug-ins

The following router plug-ins are provided and supported in OpenShift. Instructions on deploying these routers are available in Deploying a Router.

HAProxy Template Router

The HAProxy template router implementation is the reference implementation for a template router plug-in. It uses the openshift3/ose-haproxy-router repository to run an HAProxy instance alongside the template router plug-in.

The following diagram illustrates how data flows from the master through the plug-in and finally into an HAProxy configuration:

Sticky Sessions

Implementing sticky sessions is up to the underlying router configuration. The default HAProxy template implements sticky sessions using the balance source directive which balances based on the source IP. In addition, the template router plug-in provides the service name and namespace to the underlying implementation. This can be used for more advanced configuration such as implementing stick-tables that synchronize between a set of peers.

Specific configuration for this router implementation is stored in the haproxy-config.template file located in the /var/lib/haproxy/conf directory of the router container.

F5 Router

The F5 router plug-in is available starting in OpenShift Enterprise 3.0.2.

The F5 router plug-in integrates with an existing F5 BIG-IP® system in your environment. F5 BIG-IP® version 11.4 or newer is required in order to have the F5 iControl REST API. The F5 router supports unsecured, edge terminated, re-encryption terminated, and passthrough terminated routes matching on HTTP vhost and request path.

The F5 router has additional features over the F5 BIG-IP® support in OpenShift Enterprise 2. When comparing with the OpenShift routing-daemon used in earlier versions, the F5 router additionally supports:

path-based routing (using policy rules),

passthrough of encrypted connections (implemented using an iRule that parses the SNI protocol and uses a data group that is maintained by the F5 router for the servername lookup).

Passthrough routes are a special case: path-based routing is technically impossible with passthrough routes because F5 BIG-IP® itself does not see the HTTP request, so it cannot examine the path. The same restriction applies to the template router; it is a technical limitation of passthrough encryption, not a technical limitation of OpenShift.

Routing Traffic to Pods Through the SDN

Because F5 BIG-IP® is external to the OpenShift SDN, a cluster administrator must create a peer-to-peer tunnel between F5 BIG-IP® and a host that is on the SDN, typically an OpenShift node host. This ramp node can be configured as unschedulable for pods so that it will not be doing anything except act as a gateway for the F5 BIG-IP® host. It is also possible to configure multiple such hosts and use the OpenShift ipfailover feature for redundancy; the F5 BIG-IP® host would then need to be configured to use the ipfailover VIP for its tunnel’s remote endpoint.

F5 Integration Details

The operation of the F5 router is similar to that of the OpenShift routing-daemon used in earlier versions. Both use REST API calls to:

create and delete pools,

add endpoints to and delete them from those pools, and

configure policy rules to route to pools based on vhost.

Both also use scp and ssh commands to upload custom TLS/SSL certificates to F5 BIG-IP®.

The F5 router configures pools and policy rules on virtual servers as follows:

When a user creates or deletes a route on OpenShift, the router creates a pool to F5 BIG-IP® for the route (if no pool already exists) and adds a rule to, or deletes a rule from, the policy of the appropriate vserver: the HTTP vserver for non-TLS routes, or the HTTPS vserver for edge or re-encrypt routes. In the case of edge and re-encrypt routes, the router also uploads and configures the TLS certificate and key. The router supports host- and path-based routes.

Passthrough routes are a special case: to support those, it is necessary to write an iRule that parses the SNI ClientHello handshake record and looks up the servername in an F5 data-group. The router creates this iRule, associates the iRule with the vserver, and updates the F5 data-group as passthrough routes are created and deleted. Other than this implementation detail, passthrough routes work the same way as other routes.

When a user creates a service on OpenShift, the router adds a pool to F5 BIG-IP® (if no pool already exists). As endpoints on that service are created and deleted, the router adds and removes corresponding pool members.

When a user deletes the route and all endpoints associated with a particular pool, the router deletes that pool.

Route Host Names

In order for services to be exposed externally, an OpenShift route allows you to associate a service with an externally-reachable host name. This edge host name is then used to route traffic to the service.

When two routes claim the same host, the oldest route wins. If additional routes with different path fields are defined in the same namespace, those paths will be added. If multiple routes with the same path are used, the oldest takes priority.

Source

OpenShift — Basic Concepts

Before starting with the actual setup and deployment of applications, we need to understand some basic terms and concepts used in OpenShift V3.

Containers and Images

Images

Images are the basic building blocks of OpenShift and are formed out of Docker images. Each pod in an OpenShift cluster has its own images running inside it. When we configure a pod, we specify an image field that is resolved against a registry; this configuration pulls the image and deploys it on a cluster node.

Images are pulled into the cluster with the oc import-image command. oc is the client used to communicate with the OpenShift environment after logging in.

Containers

A container is created when a Docker image is deployed on an OpenShift cluster. When defining any configuration, we define a containers section in the configuration file. A single pod can have multiple containers running inside it, and all containers running on a cluster node are managed by Kubernetes in OpenShift.

Below are the specifications for defining a pod that runs multiple containers.
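As a sketch of such a specification (image names and ports are illustrative, not taken from the original), a two-container pod with Tomcat and MongoDB could be defined as:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: tomcat              # first container: Tomcat application server
    image: tomcat:8.0
    ports:
    - containerPort: 8080
  - name: mongodb             # second container: MongoDB database
    image: mongo
    ports:
    - containerPort: 27017
```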

In the above configuration, we have defined a multi-container pod with two images, Tomcat and MongoDB, inside it.

Pods and Services

A pod can be defined as a collection of containers and their storage inside a node of an OpenShift (Kubernetes) cluster. In general, there are two types of pods: single-container pods and multi-container pods.

Single-Container Pod — These can be easily created with an oc command or from a basic yml configuration file.

Create one using a simple yaml file as follows.
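A minimal single-container pod specification might look like this (the name, image, and port are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: single-container-pod
spec:
  containers:
  - name: web            # the only container in this pod
    image: nginx
    ports:
    - containerPort: 80
```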

Once the above file is created, the pod can be generated from it with the oc create command.

Multi-Container Pod — These are pods with more than one container running inside them. They are created using yaml files in the same way.

After creating these files, we can simply use the same method as above to create a pod.

Service — Just as we have a set of containers running inside a pod, we have a service that can be defined as a logical set of pods. It is an abstraction layer on top of pods that provides a single IP and DNS name through which the pods can be accessed. A service helps manage the load-balancing configuration and makes it very easy to scale pods. In OpenShift, a service is a REST object whose definition can be posted to the apiService endpoint on the OpenShift master to create a new instance.
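A sketch of such a service definition, selecting pods by label and exposing a single port (the selector and port values are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp           # logical set of pods backing this service
  ports:
  - protocol: TCP
    port: 8080           # port exposed by the service
    targetPort: 8080     # port the pods actually listen on
```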

Builds and Streams

Builds

In OpenShift, a build is a process of transforming images into containers. It is the processing that converts source code into an image. This build process works on a pre-defined strategy of building source code into an image.

The build process supports multiple strategies and sources.

Build Strategies

Source to Image — This is basically a tool that helps in building reproducible images. These images are always in a ready-to-run state using the docker run command.

Docker Build — This is the process in which images are built using a Dockerfile by running a simple docker build command.

Custom Build — These are builds that are used for creating base Docker images.

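Under the source-to-image strategy, a minimal BuildConfig might look like the following (a sketch; the repository URL, builder image, and output names are illustrative):

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: myapp-build
spec:
  source:
    git:
      uri: https://github.com/example/myapp.git   # source repository to build from
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: python:3.6        # builder image that turns source into a runnable image
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest        # resulting application image
```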

Source

Routes

Overview

An OpenShift Container Platform route exposes a service at a host name, like www.example.com, so that external clients can reach it by name.

DNS resolution for a host name is handled separately from routing. Your administrator may have configured a DNS wildcard entry that will resolve to the OpenShift Container Platform node that is running the OpenShift Container Platform router. If you are using a different host name you may need to modify its DNS records independently to resolve to the node that is running the router.

Each route consists of a name (limited to 63 characters), a service selector, and an optional security configuration.

Routers

An OpenShift Container Platform administrator can deploy routers to nodes in an OpenShift Container Platform cluster, which enable routes created by developers to be used by external clients. The routing layer in OpenShift Container Platform is pluggable, and two available router plug-ins are provided and supported by default.

See the Installation and Configuration guide for information on deploying a router.

A router uses the service selector to find the service and the endpoints backing the service. When both router and service provide load balancing, OpenShift Container Platform uses the router load balancing. A router detects relevant changes in the IP addresses of its services and adapts its configuration accordingly. This is useful for custom routers to communicate modifications of API objects to an external routing solution.

The path of a request starts with the DNS resolution of a host name to one or more routers. The suggested method is to define a cloud domain with a wildcard DNS entry pointing to one or more virtual IP (VIP) addresses backed by multiple router instances. Routes using names and addresses outside the cloud domain require configuration of individual DNS entries.

When there are fewer VIP addresses than routers, the routers corresponding to the number of addresses are active and the rest are passive. A passive router is also known as a hot-standby router. For example, with two VIP addresses and three routers, you have an "active-active-passive" configuration. See High Availability for more information on router VIP configuration.

Routes can be sharded among the set of routers. Administrators can set up sharding on a cluster-wide basis and users can set up sharding for the namespace in their project. Sharding allows the operator to define multiple router groups. Each router in the group serves only a subset of traffic.

OpenShift Container Platform routers provide external host name mapping and load balancing of service end points over protocols that pass distinguishing information directly to the router; the host name must be present in the protocol in order for the router to determine where to send it.

Router plug-ins assume they can bind to host ports 80 (HTTP) and 443 (HTTPS), by default. This means that routers must be placed on nodes where those ports are not otherwise in use. Alternatively, a router can be configured to listen on other ports by setting the ROUTER_SERVICE_HTTP_PORT and ROUTER_SERVICE_HTTPS_PORT environment variables.
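For example, the relevant portion of a router deployment configuration could override the listen ports in its container environment (a fragment sketch; the port values are illustrative):

```yaml
# Fragment of a router deployment configuration
spec:
  template:
    spec:
      containers:
      - name: router
        env:
        - name: ROUTER_SERVICE_HTTP_PORT    # alternative HTTP listen port
          value: "8080"
        - name: ROUTER_SERVICE_HTTPS_PORT   # alternative HTTPS listen port
          value: "8443"
```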

Because a router binds to ports on the host node, only one router listening on those ports can be on each node if the router uses host networking (the default). Cluster networking is configured such that all routers can access all pods in the cluster.

Routers support the following protocols:

HTTP

HTTPS (with SNI)

WebSockets

TLS with SNI

WebSocket traffic uses the same route conventions and supports the same TLS termination types as other traffic.

Template Routers

A template router is a type of router that provides certain infrastructure information to the underlying router implementation, such as:

A wrapper that watches endpoints and routes.

Endpoint and route data, which is saved into a consumable form.

Passing the internal state to a configurable template and executing the template.

Calling a reload script.

Available Router Plug-ins

The following router plug-ins are provided and supported in OpenShift Container Platform. Instructions on deploying these routers are available in Deploying a Router.

HAProxy Template Router

The HAProxy template router implementation is the reference implementation for a template router plug-in. It uses the openshift3/ose-haproxy-router repository to run an HAProxy instance alongside the template router plug-in.

The following diagram illustrates how data flows from the master through the plug-in and finally into an HAProxy configuration:

Sticky Sessions

Sticky sessions ensure that all traffic from a user’s session go to the same pod, creating a better user experience. While satisfying the user’s requests, the pod caches data, which can be used in subsequent requests. For example, for a cluster with five back-end pods and two load-balanced routers, you can ensure that the same pod receives the web traffic from the same web browser regardless of the router that handles it.

While returning routing traffic to the same pod is desired, it cannot be guaranteed. However, you can use HTTP headers to set a cookie to determine the pod used in the last connection. When the user sends another request to the application the browser re-sends the cookie and the router knows where to send the traffic.

Cluster administrators can turn off stickiness for passthrough routes separately from other connections, or turn off stickiness entirely.

By default, sticky sessions for passthrough routes are implemented using the source load balancing strategy. The default can be changed for all passthrough routes by using the ROUTER_TCP_BALANCE_SCHEME environment variable, and for individual routes by using the haproxy.router.openshift.io/balance route specific annotation.
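A sketch of a route carrying the per-route balance annotation (the route name, host, and chosen strategy are illustrative):

```yaml
apiVersion: v1
kind: Route
metadata:
  name: myapp-passthrough
  annotations:
    haproxy.router.openshift.io/balance: roundrobin  # overrides the default strategy for this route
spec:
  host: secure.example.com
  to:
    kind: Service
    name: myapp-service
  tls:
    termination: passthrough
```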

Other types of routes use the leastconn load balancing strategy by default.

Cookies cannot be set on passthrough routes, because the HTTP traffic cannot be seen. Instead, a number is calculated based on the source IP address, which determines the back-end.

If the set of back-ends changes, the traffic can be directed to the wrong server, making it less sticky. If you are using a load-balancer that hides the source IP, the same number is set for all connections, so all traffic is sent to the same pod.

In addition, the template router plug-in provides the service name and namespace to the underlying implementation. This can be used for more advanced configuration such as implementing stick-tables that synchronize between a set of peers.

Specific configuration for this router implementation is stored in the haproxy-config.template file located in the /var/lib/haproxy/conf directory of the router container. The file may be customized.

Router Environment Variables

For all the items outlined in this section, you can set environment variables in the deployment config for the router to alter its configuration, or use the oc set env command:

DEFAULT_CERTIFICATE

The contents of a default certificate to use for routes that don’t expose a TLS server cert; in PEM format.

DEFAULT_CERTIFICATE_DIR

A path to a directory that contains a file named tls.crt. If tls.crt is not a PEM file which also contains a private key, it is first combined with a file named tls.key in the same directory. The PEM-format contents are then used as the default certificate. Only used if DEFAULT_CERTIFICATE or DEFAULT_CERTIFICATE_PATH are not specified.

DEFAULT_CERTIFICATE_PATH

A path to default certificate to use for routes that don’t expose a TLS server cert; in PEM format. Only used if DEFAULT_CERTIFICATE is not specified.

EXTENDED_VALIDATION

NAMESPACE_LABELS

A label selector to apply to namespaces to watch, empty means all.

PROJECT_LABELS

A label selector to apply to projects to watch, empty means all.

RELOAD_SCRIPT

The path to the reload script to use to reload the router.

ROUTER_ALLOWED_DOMAINS

A comma-separated list of domains; the host name in a route may only belong to one of these domains. Any subdomain of a listed domain can be used. The ROUTER_DENIED_DOMAINS option overrides any values given in this option. If set, everything outside the allowed domains is rejected.

ROUTER_BACKEND_CHECK_INTERVAL

Length of time between subsequent "liveness" checks on backends. (TimeUnits)

ROUTER_COMPRESSION_MIME

A space separated list of mime types to compress. Default: "text/html text/plain text/css".

ROUTER_DEFAULT_CLIENT_TIMEOUT

Length of time within which a client has to acknowledge or send data. (TimeUnits)

ROUTER_DEFAULT_CONNECT_TIMEOUT

The maximum connect time. (TimeUnits)

ROUTER_DEFAULT_SERVER_TIMEOUT

Length of time within which a server has to acknowledge or send data. (TimeUnits)

ROUTER_DEFAULT_TUNNEL_TIMEOUT

Length of time for which TCP or WebSocket connections will remain open. Whenever HAProxy is reloaded while websocket/TCP connections are open, the old HAProxy processes linger for that period. (TimeUnits)

ROUTER_DENIED_DOMAINS

A comma-separated list of domains that the host name in a route can not be part of. No subdomain of a listed domain can be used either. This option overrides any values given in ROUTER_ALLOWED_DOMAINS.

ROUTER_ENABLE_COMPRESSION

ROUTER_LOG_LEVEL

The log level to send to the syslog server.

ROUTER_OVERRIDE_HOSTNAME

ROUTER_SERVICE_HTTPS_PORT

Port to listen for HTTPS requests.

ROUTER_SERVICE_HTTP_PORT

Port to listen for HTTP requests.

ROUTER_SERVICE_NAME

The name that the router will identify itself with in route statuses.

ROUTER_SERVICE_NAMESPACE

The namespace the router will identify itself with in route statuses. Required if ROUTER_SERVICE_NAME is used.

ROUTER_SERVICE_NO_SNI_PORT

Internal port for some front-end to back-end communication (see note below).

ROUTER_SERVICE_SNI_PORT

Internal port for some front-end to back-end communication (see note below).

ROUTER_SLOWLORIS_TIMEOUT

Length of time the transmission of an HTTP request can take. (TimeUnits)

ROUTER_SUBDOMAIN

ROUTER_SYSLOG_ADDRESS

Address to send log messages. Disabled if empty.

ROUTER_TCP_BALANCE_SCHEME

The load-balancing strategy for passthrough routes. Defaults to source.

ROUTE_LABELS

A label selector to apply to the routes to watch, empty means all.

STATS_PASSWORD

The password needed to access router stats (if the router implementation supports it).

STATS_PORT

Port to expose statistics on (if the router implementation supports it). If not set, stats are not exposed.

STATS_USERNAME

The user name needed to access router stats (if the router implementation supports it).

TEMPLATE_FILE

The path to the HAProxy template file (in the container image).

RELOAD_INTERVAL

The minimum frequency the router is allowed to reload to accept new changes. (TimeUnits)

ROUTER_USE_PROXY_PROTOCOL

ROUTER_ALLOW_WILDCARD_ROUTES

Timeouts

TimeUnits are represented by a number followed by the unit: us (microseconds), ms (milliseconds, default), s (seconds), m (minutes), h (hours), d (days).

The regular expression is: [1-9][0-9]*(us|ms|s|m|h|d)
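These units appear, for example, in per-route timeout annotations (a sketch; the route name, service, and timeout value are illustrative):

```yaml
apiVersion: v1
kind: Route
metadata:
  name: slow-backend
  annotations:
    haproxy.router.openshift.io/timeout: 5500ms   # 5.5 seconds, using the ms TimeUnit
spec:
  to:
    kind: Service
    name: slow-service
```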

Load-balancing Strategy

When a route has multiple endpoints, HAProxy distributes requests to the route among the endpoints based on the selected load-balancing strategy. This applies when no persistence information is available, such as on the first request in a session.

The strategy can be one of the following:

roundrobin : Each endpoint is used in turn, according to its weight. This is the smoothest and fairest algorithm when the server’s processing time remains equally distributed.

leastconn : The endpoint with the lowest number of connections receives the request. Round-robin is performed when multiple endpoints have the same lowest number of connections. Use this algorithm when very long sessions are expected, such as LDAP, SQL, TSE, or others. Not intended to be used with protocols that typically use short sessions such as HTTP.

source : The source IP address is hashed and divided by the total weight of the running servers to designate which server will receive the request. This ensures that the same client IP address will always reach the same server as long as no server goes down or up. If the hash result changes due to the number of running servers changing, many clients will be directed to different servers. This algorithm is generally used with passthrough routes.

The ROUTER_TCP_BALANCE_SCHEME environment variable sets the default strategy for passthrough routes.

HAProxy Strict SNI

By default, when a host does not resolve to a route in a HTTPS or TLS SNI request, the default certificate is returned to the caller as part of the 503 response. This exposes the default certificate and can pose security concerns because the wrong certificate is served for a site. The HAProxy strict-sni option to bind suppresses use of the default certificate.

The option can be set when the router is created or added later.

See Deploying a Customized HAProxy Router to implement new features within the application back-ends, or modify the current operation.

Route Host Names

In order for services to be exposed externally, an OpenShift Container Platform route allows you to associate a service with an externally-reachable host name. This edge host name is then used to route traffic to the service.

When multiple routes from different namespaces claim the same host, the oldest route wins and claims it for the namespace. If additional routes with different path fields are defined in the same namespace, those paths are added. If multiple routes with the same path are used, the oldest takes priority.
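For example, two routes in the same namespace can share a host name and split traffic by path (a sketch; the names and paths are illustrative):

```yaml
apiVersion: v1
kind: Route
metadata:
  name: site-root            # older route, claims the host
spec:
  host: www.example.com
  path: /
  to:
    kind: Service
    name: frontend
---
apiVersion: v1
kind: Route
metadata:
  name: site-api             # added alongside the older route with a different path
spec:
  host: www.example.com
  path: /api
  to:
    kind: Service
    name: backend
```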

A consequence of this behavior: suppose you have two routes for a host name, an older one and a newer one, and someone in another namespace created a route for the same host name in between. If you then delete your older route, your claim to the host name is no longer in effect; the other namespace claims the host name and your claim is lost.

Source
