 Chapter 16. API Endpoint Configuration Recommendations

This chapter provides recommendations for improving the security of both public and internal endpoints.

 Internal API Communications

OpenStack provides both public facing and private API endpoints. By default, OpenStack components use the publicly defined endpoints. The recommendation is to configure these components to use the API endpoint within the proper security domain.

Services select their respective API endpoints based on the OpenStack service catalog. However, these services may not obey the listed public or internal API endpoint values. This can lead to internal management traffic being routed to external API endpoints.

 Configure Internal URLs in Identity Service Catalog

The Identity Service catalog should be aware of your internal URLs. While this feature is not utilized by default, it may be enabled through configuration. Additionally, it should be forward-compatible with anticipated changes once this behavior becomes the default.

To register an internal URL for an endpoint:

 
$ keystone endpoint-create \
 --region RegionOne \
 --service-id=1ff4ece13c3e48d8a6461faebd9cd38f \
 --publicurl='https://public-ip:8776/v1/%(tenant_id)s' \
 --internalurl='https://management-ip:8776/v1/%(tenant_id)s' \
 --adminurl='https://management-ip:8776/v1/%(tenant_id)s'
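
To confirm the registration, list the catalog entries and verify that the internal and admin URLs resolve to addresses within the management security domain rather than public addresses. One way to check with the same keystone client:

$ keystone endpoint-list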

 Configure Applications for Internal URLs

Some services can be forced to use specific API endpoints. Therefore, we recommend that each OpenStack service communicating with the API of another service be explicitly configured to access the proper internal API endpoint.

Each project may present an inconsistent way of defining target API endpoints. Future releases of OpenStack seek to resolve these inconsistencies through consistent use of the Identity Service catalog.

 Configuration Example #1: Nova

 
[DEFAULT]
cinder_catalog_info='volume:cinder:internalURL'
glance_protocol='https'
neutron_url='https://neutron-host:9696'
neutron_admin_auth_url='https://neutron-host:9696'
s3_host='s3-host'
s3_use_ssl=True

 Configuration Example #2: Cinder

glance_host='https://glance-server'
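
Most services also validate tokens through the Identity Service's auth_token middleware; pointing it at the management address keeps this traffic inside the management security domain. The following is only a sketch: option names vary between releases, and identity-management-ip is a placeholder for your deployment.

[keystone_authtoken]
auth_uri = https://identity-management-ip:5000/
identity_uri = https://identity-management-ip:35357/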

 Paste and Middleware

Most API endpoints and other HTTP services in OpenStack utilize the Python Paste Deploy library. This is important to understand from a security perspective as it allows for manipulation of the request filter pipeline through the application's configuration. Each element in this chain is referred to as middleware. Changing the order of filters in the pipeline or adding additional middleware may have unpredictable security impact.
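
For illustration, a service's api-paste.ini generally takes the form below. The pipeline and factory names differ per service and release, and example_service is a placeholder, so treat this only as a sketch of the structure an implementor should review:

[pipeline:public_api]
pipeline = authtoken context apiapp

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory

[filter:context]
paste.filter_factory = example_service.api.middleware:ContextMiddleware.factory

[app:apiapp]
paste.app_factory = example_service.api:APIRouter.factory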

It is not uncommon for implementors to add middleware that extends OpenStack's base functionality. We recommend that implementors carefully consider the potential exposure introduced by the addition of non-standard software components to their HTTP request pipeline.

Additional information on Paste Deploy may be found at http://pythonpaste.org/deploy/.

 API Endpoint Process Isolation & Policy

API endpoint processes, especially those that reside within the public security domain, should be isolated as much as possible. Where deployments allow, API endpoints should be deployed on separate hosts for increased isolation.

 Namespaces

Many operating systems now provide compartmentalization support. Linux supports namespaces to assign processes into independent domains. System compartmentalization is covered in more detail in other parts of the guide.
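
As a minimal sketch of Linux compartmentalization, a network namespace gives a process its own isolated view of the network stack (the namespace name api-endpoint below is only an example label):

# ip netns add api-endpoint
# ip netns exec api-endpoint ip link list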

 Network Policy

API endpoints typically bridge multiple security domains; as such, particular attention should be paid to the compartmentalization of the API processes. See the Security Domain Bridging section for additional information in this area.

With careful modeling, network ACLs and IDS technologies can be used to enforce explicit point-to-point communication between network services. As a critical cross-domain service, OpenStack's message queue is a good example of where this type of explicit enforcement works well.

Policy enforcement can be implemented through the configuration of services, host-based firewalls (such as iptables), and local policy (SELinux or AppArmor), and can optionally be enforced through global network policy.
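
As a hedged example of host-based enforcement, iptables rules can restrict an API port to the management network. The port shown is the Compute API default (8774), and the source network is an assumption for a particular deployment:

# iptables -A INPUT -p tcp --dport 8774 -s 192.168.0.0/24 -j ACCEPT
# iptables -A INPUT -p tcp --dport 8774 -j DROP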

 Mandatory Access Controls

API endpoint processes should be isolated from each other and from other processes on a machine. The configuration for those processes should be restricted not only by Discretionary Access Controls, but also by Mandatory Access Controls. The goal of these enhanced access controls is to aid in the containment of API endpoint security breaches and in the escalation of such events. With mandatory access controls in place, a breached process is severely limited in the resources it can access, and such events can be alerted on earlier.
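
For example, on a host running SELinux you can confirm that an API process runs in a confined domain rather than an unconfined one (nova-api is used here as an assumed example process):

# ps -eZ | grep nova-api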
