<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
		<title>terrarum</title>
		<description>terrarum.net</description>
		<link>http://terrarum.net</link>
		<atom:link href="http://terrarum.net/feed.xml" rel="self" type="application/rss+xml" />
		
			<item>
				<title>Building OpenStack Environments Part 3</title>
				<description>&lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;/h2&gt;

&lt;p&gt;Continuing on with the series of building disposable OpenStack environments for
testing purposes, this part will cover how to install services which are not
supported by PackStack.&lt;/p&gt;

&lt;p&gt;While PackStack does an amazing job at easily and quickly creating OpenStack
environments, it only has the ability to install a
&lt;a href=&quot;https://www.rdoproject.org/rdo/matrix/&quot;&gt;subset&lt;/a&gt; of services under the OpenStack
umbrella. However, almost all OpenStack services are available through RDO, the
overarching package repository for Red Hat and CentOS.&lt;/p&gt;

&lt;p&gt;For this part in the series, I will show how to install and configure Designate,
the OpenStack DNS service, using the RDO packages.&lt;/p&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#planning-the-installation&quot; id=&quot;markdown-toc-planning-the-installation&quot;&gt;Planning the Installation&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#adding-the-installation-steps&quot; id=&quot;markdown-toc-adding-the-installation-steps&quot;&gt;Adding the Installation Steps&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#installing-powerdns&quot; id=&quot;markdown-toc-installing-powerdns&quot;&gt;Installing PowerDNS&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#installing-designate&quot; id=&quot;markdown-toc-installing-designate&quot;&gt;Installing Designate&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#build-the-image-and-launch&quot; id=&quot;markdown-toc-build-the-image-and-launch&quot;&gt;Build the Image and Launch&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;planning-the-installation&quot;&gt;Planning the Installation&lt;/h2&gt;

&lt;p&gt;PackStack spoils us by hiding all of the steps required to install an OpenStack
service. Installing a service requires a good amount of planning, even if the
service is only going to be used for testing rather than production.&lt;/p&gt;

&lt;p&gt;To begin planning, first read over any documentation you can find about the
service in question. For Designate, there is a good amount of documentation
&lt;a href=&quot;https://docs.openstack.org/designate/latest/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://docs.openstack.org/designate/latest/install/get_started.html&quot;&gt;overview&lt;/a&gt;
page shows that there are a lot of moving pieces to Designate. Whether you need
to account for all of them yourself is an open question at this point, since the
RDO packages might provide some base configuration.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://docs.openstack.org/designate/latest/install/install-rdo.html&quot;&gt;installation&lt;/a&gt;
page gives brief steps on installing Designate. Reading it, you can see that
not all of the services listed in the Overview require special configuration,
which makes things simpler.&lt;/p&gt;

&lt;p class=&quot;alert alert-warn&quot;&gt;Keep in mind that if you were to deploy Designate for production use, you might
have to tune all of these services to suit your environment. Determining how to
tune these services is out of scope for this blog post. Usually it requires
careful reading of the various Designate configuration files, looking for
supplementary information on mailing lists, and often even reading the source
code itself.&lt;/p&gt;

&lt;p&gt;The installation page shows how to use BIND as the default DNS driver. However,
I'm going to change things up here. Instead, I will show how to use PowerDNS.
There are two reasons for this:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;I'm allergic to BIND.&lt;/li&gt;
  &lt;li&gt;I had trouble piecing together everything required to run Designate with the
new PowerDNS driver, so this will also serve as documentation to help others.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;adding-the-installation-steps&quot;&gt;Adding the Installation Steps&lt;/h2&gt;

&lt;p&gt;Continuing from Parts 1 and 2, you should have a directory called
&lt;code class=&quot;highlighter-rouge&quot;&gt;terraform-openstack-test&lt;/code&gt; on your workstation. The structure of the directory
should look something like this:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;pwd&lt;/span&gt;
/home/jtopjian/terraform-openstack-test
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;tree &lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
├── files
│   └── deploy.sh
├── key
│   ├── id_rsa
│   └── id_rsa.pub
├── main.tf
└── packer
    ├── files
    │   ├── deploy.sh
    │   ├── packstack-answers.txt
    │   └── rc.local
    └── openstack
        ├── build.json
        └── main.tf
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;deploy.sh&lt;/code&gt; installs OpenStack via PackStack and then strips any unique
information from the installation. Packer then creates an image from the result.
Finally, &lt;code class=&quot;highlighter-rouge&quot;&gt;rc.local&lt;/code&gt; performs some post-boot configuration.&lt;/p&gt;

&lt;p&gt;To install and configure Designate, you will want to add additional pieces to
both &lt;code class=&quot;highlighter-rouge&quot;&gt;deploy.sh&lt;/code&gt; and &lt;code class=&quot;highlighter-rouge&quot;&gt;rc.local&lt;/code&gt;.&lt;/p&gt;

&lt;h3 id=&quot;installing-powerdns&quot;&gt;Installing PowerDNS&lt;/h3&gt;

&lt;p&gt;First, install and configure PowerDNS. To do this, add the following to
&lt;code class=&quot;highlighter-rouge&quot;&gt;deploy.sh&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  hostnamectl set-hostname localhost

  systemctl disable firewalld
  systemctl stop firewalld
  systemctl disable NetworkManager
  systemctl stop NetworkManager
  systemctl enable network
  systemctl start network

  yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
  yum install -y centos-release-openstack-ocata
  yum-config-manager --enable openstack-ocata
  yum update -y
  yum install -y openstack-packstack
  packstack --answer-file /home/centos/files/packstack-answers.txt

  source /root/keystonerc_admin
  nova flavor-create m1.acctest 99 512 5 1 --ephemeral 10
  nova flavor-create m1.resize 98 512 6 1 --ephemeral 10
  _NETWORK_ID=$(openstack network show private -c id -f value)
  _SUBNET_ID=$(openstack subnet show private_subnet -c id -f value)
  _EXTGW_ID=$(openstack network show public -c id -f value)
  _IMAGE_ID=$(openstack image show cirros -c id -f value)

  echo &quot;&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_IMAGE_NAME=&quot;cirros&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_IMAGE_ID=&quot;$_IMAGE_ID&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_NETWORK_ID=$_NETWORK_ID &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_EXTGW_ID=$_EXTGW_ID &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_POOL_NAME=&quot;public&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_FLAVOR_ID=99 &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_FLAVOR_ID_RESIZE=98 &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_DOMAIN_NAME=default &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_TENANT_ID=\$OS_PROJECT_ID &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_SHARE_NETWORK_ID=&quot;foobar&quot; &amp;gt;&amp;gt; /root/keystonerc_admin

  echo &quot;&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_IMAGE_NAME=&quot;cirros&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_IMAGE_ID=&quot;$_IMAGE_ID&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_NETWORK_ID=$_NETWORK_ID &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_EXTGW_ID=$_EXTGW_ID &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_POOL_NAME=&quot;public&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_FLAVOR_ID=99 &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_FLAVOR_ID_RESIZE=98 &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_DOMAIN_NAME=default &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_TENANT_ID=\$OS_PROJECT_ID &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_SHARE_NETWORK_ID=&quot;foobar&quot; &amp;gt;&amp;gt; /root/keystonerc_demo

&lt;span class=&quot;gi&quot;&gt;+ mysql -e &quot;CREATE DATABASE pdns default character set utf8 default collate utf8_general_ci&quot;
+ mysql -e &quot;GRANT ALL PRIVILEGES ON pdns.* TO 'pdns'@'localhost' IDENTIFIED BY 'password'&quot;
+
+ yum install -y epel-release yum-plugin-priorities
+ curl -o /etc/yum.repos.d/powerdns-auth-40.repo https://repo.powerdns.com/repo-files/centos-auth-40.repo
+ yum install -y pdns pdns-backend-mysql
+
+ echo &quot;allow-recursion=127.0.0.1
+ config-dir=/etc/powerdns
+ daemon=yes
+ disable-axfr=no
+ guardian=yes
+ local-address=0.0.0.0
+ local-ipv6=::
+ local-port=53
+ setgid=pdns
+ setuid=pdns
+ slave=yes
+ socket-dir=/var/run
+ version-string=powerdns
+ out-of-zone-additional-processing=no
+ webserver=yes
+ api=yes
+ api-key=someapikey
+ launch=gmysql
+ gmysql-host=127.0.0.1
+ gmysql-user=pdns
+ gmysql-dbname=pdns
+ gmysql-password=password&quot; | tee /etc/pdns/pdns.conf
+
+ mysql pdns &amp;lt; /home/centos/files/pdns.sql
+ systemctl restart pdns
&lt;/span&gt;
  yum install -y wget git
  wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
  chmod +x /usr/local/bin/gimme
  eval &quot;$(/usr/local/bin/gimme 1.8)&quot;
  export GOPATH=$HOME/go
  export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

  go get github.com/gophercloud/gophercloud
  pushd ~/go/src/github.com/gophercloud/gophercloud
  go get -u ./...
  popd

  cat &amp;gt;&amp;gt; /root/.bashrc &amp;lt;&amp;lt;EOF
  if [[ -f /usr/local/bin/gimme ]]; then
    eval &quot;\$(/usr/local/bin/gimme 1.8)&quot;
    export GOPATH=\$HOME/go
    export PATH=\$PATH:\$GOROOT/bin:\$GOPATH/bin
  fi

  gophercloudtest() {
    if [[ -n \$1 ]] &amp;amp;&amp;amp; [[ -n \$2 ]]; then
      pushd  ~/go/src/github.com/gophercloud/gophercloud
      go test -v -tags &quot;fixtures acceptance&quot; -run &quot;\$1&quot; github.com/gophercloud/gophercloud/acceptance/openstack/\$2 | tee ~/gophercloud.log
      popd
    fi
  }
  EOF

  systemctl stop openstack-cinder-backup.service
  systemctl stop openstack-cinder-scheduler.service
  systemctl stop openstack-cinder-volume.service
  systemctl stop openstack-nova-cert.service
  systemctl stop openstack-nova-compute.service
  systemctl stop openstack-nova-conductor.service
  systemctl stop openstack-nova-consoleauth.service
  systemctl stop openstack-nova-novncproxy.service
  systemctl stop openstack-nova-scheduler.service
  systemctl stop neutron-dhcp-agent.service
  systemctl stop neutron-l3-agent.service
  systemctl stop neutron-lbaasv2-agent.service
  systemctl stop neutron-metadata-agent.service
  systemctl stop neutron-openvswitch-agent.service
  systemctl stop neutron-metering-agent.service

  mysql -e &quot;update services set deleted_at=now(), deleted=id&quot; cinder
  mysql -e &quot;update services set deleted_at=now(), deleted=id&quot; nova
  mysql -e &quot;update compute_nodes set deleted_at=now(), deleted=id&quot; nova
  for i in $(openstack network agent list -c ID -f value); do
    neutron agent-delete $i
  done

  systemctl stop httpd

  cp /home/centos/files/rc.local /etc
  chmod +x /etc/rc.local
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;There are five things being done above:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;A MySQL database is created for PowerDNS.&lt;/li&gt;
  &lt;li&gt;PowerDNS is then installed.&lt;/li&gt;
  &lt;li&gt;A configuration file is created.&lt;/li&gt;
  &lt;li&gt;A database schema is imported into the PowerDNS database.&lt;/li&gt;
  &lt;li&gt;PowerDNS is restarted to pick up the new configuration.&lt;/li&gt;
&lt;/ol&gt;
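&lt;p&gt;Once &lt;code class=&quot;highlighter-rouge&quot;&gt;deploy.sh&lt;/code&gt; has run on the build instance, two quick sanity checks can
confirm that PowerDNS came up correctly. The commands below are illustrative, not part of the
build itself: they assume the same passwordless root MySQL access the script uses, the
&lt;code class=&quot;highlighter-rouge&quot;&gt;someapikey&lt;/code&gt; API key set in the configuration above, and PowerDNS's
default webserver port of 8081.&lt;/p&gt;

```shell
# Confirm the schema import created the PowerDNS tables.
mysql pdns -e "SHOW TABLES" || echo "mysql not reachable"

# Confirm the HTTP API (webserver=yes, api=yes) answers with the api-key.
curl -s -H "X-API-Key: someapikey" \
  http://127.0.0.1:8081/api/v1/servers/localhost \
  || echo "pdns API not reachable"
```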

&lt;p&gt;You'll notice the schema is read from a file called &lt;code class=&quot;highlighter-rouge&quot;&gt;files/pdns.sql&lt;/code&gt;. Add the
following to &lt;code class=&quot;highlighter-rouge&quot;&gt;terraform-openstack-test/packer/files/pdns.sql&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-sql highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;domains&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;id&lt;/span&gt;                    &lt;span class=&quot;n&quot;&gt;INT&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;AUTO_INCREMENT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;name&lt;/span&gt;                  &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;255&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NOT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;master&lt;/span&gt;                &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;128&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;DEFAULT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;last_check&lt;/span&gt;            &lt;span class=&quot;n&quot;&gt;INT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;DEFAULT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;type&lt;/span&gt;                  &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;6&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NOT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;notified_serial&lt;/span&gt;       &lt;span class=&quot;n&quot;&gt;INT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;DEFAULT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;account&lt;/span&gt;               &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;40&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;DEFAULT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;PRIMARY&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;KEY&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;id&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Engine&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;InnoDB&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;UNIQUE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;INDEX&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;name_index&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;ON&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;domains&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;


&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;records&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;id&lt;/span&gt;                    &lt;span class=&quot;n&quot;&gt;BIGINT&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;AUTO_INCREMENT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;domain_id&lt;/span&gt;             &lt;span class=&quot;n&quot;&gt;INT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;DEFAULT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;name&lt;/span&gt;                  &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;255&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;DEFAULT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;type&lt;/span&gt;                  &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;10&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;DEFAULT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;content&lt;/span&gt;               &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;64000&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;DEFAULT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;ttl&lt;/span&gt;                   &lt;span class=&quot;n&quot;&gt;INT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;DEFAULT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;prio&lt;/span&gt;                  &lt;span class=&quot;n&quot;&gt;INT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;DEFAULT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;change_date&lt;/span&gt;           &lt;span class=&quot;n&quot;&gt;INT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;DEFAULT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;disabled&lt;/span&gt;              &lt;span class=&quot;n&quot;&gt;TINYINT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;DEFAULT&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;ordername&lt;/span&gt;             &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;255&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;BINARY&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;DEFAULT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;auth&lt;/span&gt;                  &lt;span class=&quot;n&quot;&gt;TINYINT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;DEFAULT&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;PRIMARY&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;KEY&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;id&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Engine&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;InnoDB&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;INDEX&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;nametype_index&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;ON&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;records&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;type&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;INDEX&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;domain_id&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;ON&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;records&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;domain_id&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;INDEX&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;recordorder&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;ON&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;records&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;domain_id&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ordername&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;


&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;supermasters&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;ip&lt;/span&gt;                    &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;64&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NOT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;nameserver&lt;/span&gt;            &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;255&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NOT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;account&lt;/span&gt;               &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;40&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NOT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;PRIMARY&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;KEY&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;ip&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;nameserver&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Engine&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;InnoDB&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;


&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;comments&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;id&lt;/span&gt;                    &lt;span class=&quot;n&quot;&gt;INT&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;AUTO_INCREMENT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;domain_id&lt;/span&gt;             &lt;span class=&quot;n&quot;&gt;INT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NOT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;name&lt;/span&gt;                  &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;255&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NOT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;type&lt;/span&gt;                  &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;10&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NOT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;modified_at&lt;/span&gt;           &lt;span class=&quot;n&quot;&gt;INT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NOT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;account&lt;/span&gt;               &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;40&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NOT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;comment&lt;/span&gt;               &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;64000&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NOT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;PRIMARY&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;KEY&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;id&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Engine&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;InnoDB&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;INDEX&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;comments_domain_id_idx&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;ON&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;comments&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;domain_id&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;INDEX&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;comments_name_type_idx&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;ON&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;comments&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;type&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;INDEX&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;comments_order_idx&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;ON&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;comments&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;domain_id&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;modified_at&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;


&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;domainmetadata&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;id&lt;/span&gt;                    &lt;span class=&quot;n&quot;&gt;INT&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;AUTO_INCREMENT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;domain_id&lt;/span&gt;             &lt;span class=&quot;n&quot;&gt;INT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NOT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;kind&lt;/span&gt;                  &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;32&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;content&lt;/span&gt;               &lt;span class=&quot;n&quot;&gt;TEXT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;PRIMARY&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;KEY&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;id&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Engine&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;InnoDB&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;INDEX&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;domainmetadata_idx&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;ON&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;domainmetadata&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;domain_id&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;kind&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;


&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;cryptokeys&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;id&lt;/span&gt;                    &lt;span class=&quot;n&quot;&gt;INT&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;AUTO_INCREMENT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;domain_id&lt;/span&gt;             &lt;span class=&quot;n&quot;&gt;INT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NOT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;flags&lt;/span&gt;                 &lt;span class=&quot;n&quot;&gt;INT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NOT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NULL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;active&lt;/span&gt;                &lt;span class=&quot;n&quot;&gt;BOOL&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;content&lt;/span&gt;               &lt;span class=&quot;n&quot;&gt;TEXT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;PRIMARY&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;KEY&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;id&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Engine&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;InnoDB&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;INDEX&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;domainidindex&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;ON&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;cryptokeys&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;domain_id&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;


&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;tsigkeys&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;id&lt;/span&gt;                    &lt;span class=&quot;n&quot;&gt;INT&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;AUTO_INCREMENT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;name&lt;/span&gt;                  &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;255&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;algorithm&lt;/span&gt;             &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;50&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;secret&lt;/span&gt;                &lt;span class=&quot;n&quot;&gt;VARCHAR&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;255&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;PRIMARY&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;KEY&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;id&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Engine&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;InnoDB&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;UNIQUE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;INDEX&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;namealgoindex&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;ON&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;tsigkeys&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;algorithm&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;installing-designate&quot;&gt;Installing Designate&lt;/h3&gt;

&lt;p&gt;Now that &lt;code class=&quot;highlighter-rouge&quot;&gt;deploy.sh&lt;/code&gt; will install and configure PowerDNS, add the steps to
install and configure Designate:&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  hostnamectl set-hostname localhost

  systemctl disable firewalld
  systemctl stop firewalld
  systemctl disable NetworkManager
  systemctl stop NetworkManager
  systemctl enable network
  systemctl start network

  yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
  yum install -y centos-release-openstack-ocata
  yum-config-manager --enable openstack-ocata
  yum update -y
  yum install -y openstack-packstack
  packstack --answer-file /home/centos/files/packstack-answers.txt

  source /root/keystonerc_admin
  nova flavor-create m1.acctest 99 512 5 1 --ephemeral 10
  nova flavor-create m1.resize 98 512 6 1 --ephemeral 10
  _NETWORK_ID=$(openstack network show private -c id -f value)
  _SUBNET_ID=$(openstack subnet show private_subnet -c id -f value)
  _EXTGW_ID=$(openstack network show public -c id -f value)
  _IMAGE_ID=$(openstack image show cirros -c id -f value)

  echo &quot;&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_IMAGE_NAME=&quot;cirros&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_IMAGE_ID=&quot;$_IMAGE_ID&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_NETWORK_ID=$_NETWORK_ID &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_EXTGW_ID=$_EXTGW_ID &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_POOL_NAME=&quot;public&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_FLAVOR_ID=99 &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_FLAVOR_ID_RESIZE=98 &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_DOMAIN_NAME=default &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_TENANT_ID=\$OS_PROJECT_ID &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_SHARE_NETWORK_ID=&quot;foobar&quot; &amp;gt;&amp;gt; /root/keystonerc_admin

  echo &quot;&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_IMAGE_NAME=&quot;cirros&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_IMAGE_ID=&quot;$_IMAGE_ID&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_NETWORK_ID=$_NETWORK_ID &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_EXTGW_ID=$_EXTGW_ID &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_POOL_NAME=&quot;public&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_FLAVOR_ID=99 &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_FLAVOR_ID_RESIZE=98 &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_DOMAIN_NAME=default &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_TENANT_ID=\$OS_PROJECT_ID &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_SHARE_NETWORK_ID=&quot;foobar&quot; &amp;gt;&amp;gt; /root/keystonerc_demo

  mysql -e &quot;CREATE DATABASE pdns default character set utf8 default collate utf8_general_ci&quot;
  mysql -e &quot;GRANT ALL PRIVILEGES ON pdns.* TO 'pdns'@'localhost' IDENTIFIED BY 'password'&quot;

  yum install -y epel-release yum-plugin-priorities
  curl -o /etc/yum.repos.d/powerdns-auth-40.repo https://repo.powerdns.com/repo-files/centos-auth-40.repo
  yum install -y pdns pdns-backend-mysql

  echo &quot;allow-recursion=127.0.0.1
  config-dir=/etc/powerdns
  daemon=yes
  disable-axfr=no
  guardian=yes
  local-address=0.0.0.0
  local-ipv6=::
  local-port=53
  setgid=pdns
  setuid=pdns
  slave=yes
  socket-dir=/var/run
  version-string=powerdns
  out-of-zone-additional-processing=no
  webserver=yes
  api=yes
  api-key=someapikey
  launch=gmysql
  gmysql-host=127.0.0.1
  gmysql-user=pdns
  gmysql-dbname=pdns
  gmysql-password=password&quot; | tee /etc/pdns/pdns.conf

  mysql pdns &amp;lt; /home/centos/files/pdns.sql
  sudo systemctl restart pdns

&lt;span class=&quot;gi&quot;&gt;+ openstack user create --domain default --password password designate
+ openstack role add --project services --user designate admin
+ openstack service create --name designate --description &quot;DNS&quot; dns
+ openstack endpoint create --region RegionOne dns public http://127.0.0.1:9001/
+
+ mysql -e &quot;CREATE DATABASE designate CHARACTER SET utf8 COLLATE utf8_general_ci&quot;
+ mysql -e &quot;CREATE DATABASE designate_pool_manager&quot;
+ mysql -e &quot;GRANT ALL PRIVILEGES ON designate.* TO 'designate'@'localhost' IDENTIFIED BY 'password'&quot;
+ mysql -e &quot;GRANT ALL PRIVILEGES ON designate_pool_manager.* TO 'designate'@'localhost' IDENTIFIED BY 'password'&quot;
+
+ yum install -y crudini
+
+ yum install -y openstack-designate\*
+
+ cp /home/centos/files/pools.yaml /etc/designate/
+
+ designate_conf=&quot;/etc/designate/designate.conf&quot;
+ crudini --set $designate_conf DEFAULT debug True
+ crudini --set $designate_conf DEFAULT notification_driver messaging
+ crudini --set $designate_conf service:api enabled_extensions_v2 &quot;quotas, reports&quot;
+ crudini --set $designate_conf keystone_authtoken auth_uri http://127.0.0.1:5000
+ crudini --set $designate_conf keystone_authtoken auth_url http://127.0.0.1:35357
+ crudini --set $designate_conf keystone_authtoken username designate
+ crudini --set $designate_conf keystone_authtoken password password
+ crudini --set $designate_conf keystone_authtoken project_name services
+ crudini --set $designate_conf keystone_authtoken auth_type password
+ crudini --set $designate_conf service:worker enabled true
+ crudini --set $designate_conf service:worker notify true
+ crudini --set $designate_conf storage:sqlalchemy connection mysql+pymysql://designate:password@127.0.0.1/designate
+
+ sudo -u designate designate-manage database sync
+
+ systemctl enable designate-central designate-api
+ systemctl enable designate-worker designate-producer designate-mdns
+ systemctl restart designate-central designate-api
+ systemctl restart designate-worker designate-producer designate-mdns
+
+ sudo -u designate designate-manage pool update
&lt;/span&gt;
  yum install -y wget git
  wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
  chmod +x /usr/local/bin/gimme
  eval &quot;$(/usr/local/bin/gimme 1.8)&quot;
  export GOPATH=$HOME/go
  export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

  go get github.com/gophercloud/gophercloud
  pushd ~/go/src/github.com/gophercloud/gophercloud
  go get -u ./...
  popd

  cat &amp;gt;&amp;gt; /root/.bashrc &amp;lt;&amp;lt;EOF
  if [[ -f /usr/local/bin/gimme ]]; then
    eval &quot;\$(/usr/local/bin/gimme 1.8)&quot;
    export GOPATH=\$HOME/go
    export PATH=\$PATH:\$GOROOT/bin:\$GOPATH/bin
  fi

  gophercloudtest() {
    if [[ -n \$1 ]] &amp;amp;&amp;amp; [[ -n \$2 ]]; then
      pushd  ~/go/src/github.com/gophercloud/gophercloud
      go test -v -tags &quot;fixtures acceptance&quot; -run &quot;\$1&quot; github.com/gophercloud/gophercloud/acceptance/openstack/\$2 | tee ~/gophercloud.log
      popd
    fi
  }
  EOF

  systemctl stop openstack-cinder-backup.service
  systemctl stop openstack-cinder-scheduler.service
  systemctl stop openstack-cinder-volume.service
  systemctl stop openstack-nova-cert.service
  systemctl stop openstack-nova-compute.service
  systemctl stop openstack-nova-conductor.service
  systemctl stop openstack-nova-consoleauth.service
  systemctl stop openstack-nova-novncproxy.service
  systemctl stop openstack-nova-scheduler.service
  systemctl stop neutron-dhcp-agent.service
  systemctl stop neutron-l3-agent.service
  systemctl stop neutron-lbaasv2-agent.service
  systemctl stop neutron-metadata-agent.service
  systemctl stop neutron-openvswitch-agent.service
  systemctl stop neutron-metering-agent.service
&lt;span class=&quot;gi&quot;&gt;+ systemctl stop designate-central designate-api
+ systemctl stop designate-worker designate-producer designate-mdns
&lt;/span&gt;
  mysql -e &quot;update services set deleted_at=now(), deleted=id&quot; cinder
  mysql -e &quot;update services set deleted_at=now(), deleted=id&quot; nova
  mysql -e &quot;update compute_nodes set deleted_at=now(), deleted=id&quot; nova
  for i in $(openstack network agent list -c ID -f value); do
    neutron agent-delete $i
  done

  systemctl stop httpd

  cp /home/centos/files/rc.local /etc
  chmod +x /etc/rc.local
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The highlighted additions above perform several steps:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The &lt;code class=&quot;highlighter-rouge&quot;&gt;openstack&lt;/code&gt; command is used to create a new service account called
&lt;code class=&quot;highlighter-rouge&quot;&gt;designate&lt;/code&gt;. A catalog endpoint is also created.&lt;/li&gt;
  &lt;li&gt;A database called &lt;code class=&quot;highlighter-rouge&quot;&gt;designate&lt;/code&gt; is created.&lt;/li&gt;
  &lt;li&gt;A utility called &lt;a href=&quot;http://www.pixelbeat.org/programs/crudini/&quot;&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;crudini&lt;/code&gt;&lt;/a&gt; is
installed. This is an amazing little tool to help modify &lt;code class=&quot;highlighter-rouge&quot;&gt;ini&lt;/code&gt; files on the
command-line.&lt;/li&gt;
  &lt;li&gt;Designate is installed.&lt;/li&gt;
  &lt;li&gt;A bundled &lt;code class=&quot;highlighter-rouge&quot;&gt;pools.yaml&lt;/code&gt; file is copied to &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/designate&lt;/code&gt;. I'll show the
contents of this file soon.&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;crudini&lt;/code&gt; is used to configure &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/designate/designate.conf&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;The Designate database's schema is imported using the &lt;code class=&quot;highlighter-rouge&quot;&gt;designate-manage&lt;/code&gt;
command.&lt;/li&gt;
  &lt;li&gt;The Designate services are enabled in &lt;code class=&quot;highlighter-rouge&quot;&gt;systemd&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;designate-manage&lt;/code&gt; is again used, this time to update the DNS pools.&lt;/li&gt;
  &lt;li&gt;The Designate services are added to the list of services to stop before
the image/snapshot is created.&lt;/li&gt;
&lt;/ol&gt;
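&lt;p&gt;The effect of a &lt;code class=&quot;highlighter-rouge&quot;&gt;crudini --set&lt;/code&gt; call can be sketched with Python's standard &lt;code class=&quot;highlighter-rouge&quot;&gt;configparser&lt;/code&gt; module. This is a minimal illustration of the set-a-key-in-a-section behaviour, not crudini's actual implementation:&lt;/p&gt;

```python
import configparser
import io

def ini_set(text, section, key, value):
    """Roughly what `crudini --set <file> <section> <key> <value>` does:
    set key=value in the given section, creating the section if needed."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    if section != "DEFAULT" and not cp.has_section(section):
        cp.add_section(section)
    cp.set(section, key, value)
    out = io.StringIO()
    cp.write(out)
    return out.getvalue()

# Mirrors one of the designate.conf edits above.
conf = ini_set("[DEFAULT]\n", "keystone_authtoken", "username", "designate")
```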

&lt;p&gt;These steps roughly follow the Designate Installation
Guide linked to earlier.&lt;/p&gt;

&lt;p&gt;As mentioned, a &lt;code class=&quot;highlighter-rouge&quot;&gt;pools.yaml&lt;/code&gt; file is copied from the &lt;code class=&quot;highlighter-rouge&quot;&gt;files&lt;/code&gt; directory. Create
a file called &lt;code class=&quot;highlighter-rouge&quot;&gt;terraform-openstack-test/packer/files/pools.yaml&lt;/code&gt; with the
following contents:&lt;/p&gt;

&lt;div class=&quot;language-yaml highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;

&lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;default&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;description&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;Default PowerDNS Pool&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;attributes&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;{}&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;ns_records&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;hostname&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;ns.example.com.&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;priority&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;1&lt;/span&gt;

  &lt;span class=&quot;na&quot;&gt;nameservers&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;host&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;127.0.0.1&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;port&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;53&lt;/span&gt;

  &lt;span class=&quot;na&quot;&gt;targets&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;type&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;pdns4&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;description&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;PowerDNS4 DNS Server&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;masters&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;na&quot;&gt;host&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;127.0.0.1&lt;/span&gt;
          &lt;span class=&quot;na&quot;&gt;port&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;5354&lt;/span&gt;

      &lt;span class=&quot;c1&quot;&gt;# PowerDNS Configuration options&lt;/span&gt;
      &lt;span class=&quot;na&quot;&gt;options&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;host&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;127.0.0.1&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;port&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;53&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;api_endpoint&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;http://127.0.0.1:8081&lt;/span&gt;
        &lt;span class=&quot;na&quot;&gt;api_token&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;someapikey&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Finally, modify the &lt;code class=&quot;highlighter-rouge&quot;&gt;rc.local&lt;/code&gt; file:&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  #!/bin/bash
  set -x

  export HOME=/root

  sleep 60

  public_ip=$(curl http://169.254.169.254/latest/meta-data/public-ipv4/)
  if [[ -n $public_ip ]]; then
    while true ; do
      mysql -e &quot;update endpoint set url = replace(url, '127.0.0.1', '$public_ip')&quot; keystone
      if [[ $? == 0 ]]; then
        break
      fi
      sleep 10
    done

    sed -i -e &quot;s/127.0.0.1/$public_ip/g&quot; /root/keystonerc_demo
    sed -i -e &quot;s/127.0.0.1/$public_ip/g&quot; /root/keystonerc_admin
  fi

  systemctl restart rabbitmq-server
  while [[ true ]]; do
    pgrep -f rabbit
    if [[ $? == 0 ]]; then
      break
    fi
    sleep 10
    systemctl restart rabbitmq-server
  done

  systemctl restart openstack-cinder-api.service
  systemctl restart openstack-cinder-backup.service
  systemctl restart openstack-cinder-scheduler.service
  systemctl restart openstack-cinder-volume.service
  systemctl restart openstack-nova-cert.service
  systemctl restart openstack-nova-compute.service
  systemctl restart openstack-nova-conductor.service
  systemctl restart openstack-nova-consoleauth.service
  systemctl restart openstack-nova-novncproxy.service
  systemctl restart openstack-nova-scheduler.service
  systemctl restart neutron-dhcp-agent.service
  systemctl restart neutron-l3-agent.service
  systemctl restart neutron-lbaasv2-agent.service
  systemctl restart neutron-metadata-agent.service
  systemctl restart neutron-openvswitch-agent.service
  systemctl restart neutron-metering-agent.service
  systemctl restart httpd
&lt;span class=&quot;gi&quot;&gt;+ systemctl restart designate-central designate-api
+ systemctl restart designate-worker designate-producer designate-mdns
+ systemctl restart pdns
&lt;/span&gt;
  nova-manage cell_v2 discover_hosts

&lt;span class=&quot;gi&quot;&gt;+ sudo -u designate designate-manage pool update
+
+ iptables -I INPUT -p tcp --dport 9001 -j ACCEPT
+ ip6tables -I INPUT -p tcp --dport 9001 -j ACCEPT
+
&lt;/span&gt;  iptables -I INPUT -p tcp --dport 80 -j ACCEPT
  ip6tables -I INPUT -p tcp --dport 80 -j ACCEPT
  cp /root/keystonerc* /var/www/html
  chmod 666 /var/www/html/keystonerc*
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The highlighted additions above do the following:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The Designate services have been added to the list of services to
be restarted during boot.&lt;/li&gt;
  &lt;li&gt;PowerDNS is also restarted.&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;designate-manage&lt;/code&gt; is again used to update the DNS pools.&lt;/li&gt;
  &lt;li&gt;Port 9001 is opened for traffic.&lt;/li&gt;
&lt;/ol&gt;
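&lt;p&gt;The endpoint rewrite near the top of &lt;code class=&quot;highlighter-rouge&quot;&gt;rc.local&lt;/code&gt; is what makes the snapshot reusable: every Keystone endpoint URL recorded with &lt;code class=&quot;highlighter-rouge&quot;&gt;127.0.0.1&lt;/code&gt; gets pointed at the new virtual machine's public IP. A minimal Python sketch of that substitution (the script itself does this with &lt;code class=&quot;highlighter-rouge&quot;&gt;mysql&lt;/code&gt; and &lt;code class=&quot;highlighter-rouge&quot;&gt;sed&lt;/code&gt;):&lt;/p&gt;

```python
def rewrite_endpoints(urls, public_ip):
    """Replace the placeholder loopback address with the instance's
    public IP, as rc.local does against the keystone endpoint table."""
    return [url.replace("127.0.0.1", public_ip) for url in urls]

# 203.0.113.10 is a documentation address standing in for the real public IP.
urls = ["http://127.0.0.1:5000/v3", "http://127.0.0.1:9001/"]
rewritten = rewrite_endpoints(urls, "203.0.113.10")
```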

&lt;h2 id=&quot;build-the-image-and-launch&quot;&gt;Build the Image and Launch&lt;/h2&gt;

&lt;p&gt;With the above in place, you can regenerate your image using Packer and then
launch a virtual machine using Terraform.&lt;/p&gt;

&lt;p&gt;When the virtual machine is up and running, you'll find that your testing
environment is now running OpenStack Designate.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;This blog post covered how to add a service to your OpenStack testing
environment that is not supported by PackStack. This was done by reviewing the
steps to manually install and configure the service, translating those steps
to automated commands, and adding those commands to the existing deployment
scripts.&lt;/p&gt;
</description>
				<pubDate>Mon, 09 Oct 2017 00:00:00 -0600</pubDate>
				<link>http://terrarum.net/blog/building-openstack-environments-3.html</link>
				<guid isPermaLink="true">http://terrarum.net/blog/building-openstack-environments-3.html</guid>
			</item>
		
			<item>
				<title>Building OpenStack Environments Part 2</title>
				<description>&lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;/h2&gt;

&lt;p&gt;In the &lt;a href=&quot;http://terrarum.net/blog/building-openstack-environments.html&quot;&gt;last post&lt;/a&gt;,
I detailed how to create an all-in-one OpenStack environment in an isolated
virtual machine for the purpose of testing OpenStack-based applications.&lt;/p&gt;

&lt;p&gt;In this post, I'll cover how to create an image from the environment. This
will allow you to launch virtual machines which already have the OpenStack
environment installed and running. The benefits of this approach is that it
reduces the time required to build the environment as well as pins the
environment to a known working version.&lt;/p&gt;

&lt;p&gt;In addition, I'll cover how to modify the all-in-one environment so that it can
be accessed remotely. This way, testing does not have to be done locally on
the virtual machine.&lt;/p&gt;

&lt;p class=&quot;alert alert-info&quot;&gt;Note: I realize the title of this series might be a misnomer. This series
is not covering how to deploy OpenStack &lt;em&gt;in general&lt;/em&gt;, but how to set up
disposable OpenStack environments for testing purposes. Blame line wrapping.&lt;/p&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#how-to-generate-the-image&quot; id=&quot;markdown-toc-how-to-generate-the-image&quot;&gt;How to Generate the Image&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#creating-a-reusable-openstack-image&quot; id=&quot;markdown-toc-creating-a-reusable-openstack-image&quot;&gt;Creating a Reusable OpenStack Image&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#creating-a-simple-image&quot; id=&quot;markdown-toc-creating-a-simple-image&quot;&gt;Creating a Simple Image&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#generating-an-answer-file&quot; id=&quot;markdown-toc-generating-an-answer-file&quot;&gt;Generating an Answer File&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#installing-openstack&quot; id=&quot;markdown-toc-installing-openstack&quot;&gt;Installing OpenStack&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#removing-unique-data&quot; id=&quot;markdown-toc-removing-unique-data&quot;&gt;Removing Unique Data&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#adding-an-rclocal-file&quot; id=&quot;markdown-toc-adding-an-rclocal-file&quot;&gt;Adding an rc.local File&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#using-the-image&quot; id=&quot;markdown-toc-using-the-image&quot;&gt;Using the Image&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;how-to-generate-the-image&quot;&gt;How to Generate the Image&lt;/h2&gt;

&lt;p&gt;AWS and OpenStack (and any other cloud provider) provide the ability to create
an image (an AMI, a qcow2 image, etc.) from a running virtual machine. This is
commonly known as &quot;snapshotting&quot;.&lt;/p&gt;

&lt;p&gt;The process described here will use snapshotting, but it's not &lt;em&gt;that&lt;/em&gt; simple.
OpenStack has a lot of moving pieces and some of those pieces are
dependent on unique configurations of the host: the hostname, the IP
address(es), etc. These items must be accounted for and configured correctly
on the new virtual machine.&lt;/p&gt;

&lt;p&gt;With this in mind, the process of generating an image is roughly:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Launch a virtual machine.&lt;/li&gt;
  &lt;li&gt;Install an all-in-one OpenStack environment.&lt;/li&gt;
  &lt;li&gt;Remove any unique information from the OpenStack databases.&lt;/li&gt;
  &lt;li&gt;Snapshot.&lt;/li&gt;
  &lt;li&gt;Upon creation of a new virtual machine, ensure OpenStack knows about the new unique information.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;creating-a-reusable-openstack-image&quot;&gt;Creating a Reusable OpenStack Image&lt;/h2&gt;

&lt;p&gt;Just like in Part 1, it's best to ensure this entire process is automated.
Terraform works great for provisioning and deploying infrastructure, but it is
not suited to niche tasks such as snapshotting.&lt;/p&gt;

&lt;p&gt;Fortunately, there's &lt;a href=&quot;https://www.packer.io/&quot;&gt;Packer&lt;/a&gt;. And even more fortunate
is that Packer supports a wide array of cloud services.&lt;/p&gt;

&lt;p class=&quot;alert alert-info&quot;&gt;If you haven't used Packer before, I recommend going through the
&lt;a href=&quot;https://www.packer.io/intro/index.html&quot;&gt;intro&lt;/a&gt; before proceeding here.&lt;/p&gt;

&lt;p&gt;In Part 1, I used AWS as the cloud being deployed to. For this part, I'll
switch things up and use an OpenStack cloud.&lt;/p&gt;

&lt;h3 id=&quot;creating-a-simple-image&quot;&gt;Creating a Simple Image&lt;/h3&gt;

&lt;p&gt;To begin, you can continue using the same &lt;code class=&quot;highlighter-rouge&quot;&gt;terraform-openstack-test&lt;/code&gt; directory
that was used in Part 1.&lt;/p&gt;

&lt;p&gt;First, create a new directory called &lt;code class=&quot;highlighter-rouge&quot;&gt;packer/openstack&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;pwd&lt;/span&gt;
/home/jtopjian/terraform-openstack-test
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;mkdir &lt;span class=&quot;nt&quot;&gt;-p&lt;/span&gt; packer/openstack
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;packer/openstack
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Next, create a file called &lt;code class=&quot;highlighter-rouge&quot;&gt;build.json&lt;/code&gt; with the following contents:&lt;/p&gt;

&lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;builders&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[{&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;type&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;openstack&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;image_name&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;packstack-ocata&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;reuse_ips&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;kc&quot;&gt;true&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;ssh_username&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;centos&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;

    &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;flavor&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;{{user `flavor`}}&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;security_groups&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;{{user `secgroup`}}&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;],&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;source_image&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;{{user `image_id`}}&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;floating_ip_pool&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;{{user `pool`}}&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
    &lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;networks&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;w&quot;&gt; &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;{{user `network_id`}}&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
  &lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}]&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;
&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;w&quot;&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;I've broken the above into two sections: the top section has hard-coded values
while the bottom section requires input on the command-line. This is because
the values will vary between your OpenStack cloud and my OpenStack cloud.&lt;/p&gt;

&lt;p&gt;With the above in place, run:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;packer build &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-var&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'flavor=m1.large'&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-var&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'secgroup=AllowAll'&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-var&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'image_id=9abadd38-a33d-44c2-8356-b8b8ae184e04'&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-var&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'pool=public'&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-var&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'network_id=b0b12e8f-a695-480e-9dc2-3dc8ac2d55fd'&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    build.json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p class=&quot;alert alert-info&quot;&gt;Note the following: the &lt;code class=&quot;highlighter-rouge&quot;&gt;image_id&lt;/code&gt; must be a CentOS 7 image and the Security
Group must allow traffic from your workstation to Port 22.&lt;/p&gt;

&lt;p&gt;This command will take some time to complete. When it has finished, it will
print the UUID of a newly generated image:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;==&amp;gt; Builds finished. The artifacts of successful builds are:
--&amp;gt; openstack: An image was created: 53ecc829-60c0-4a87-81f4-9fc603ff2a8f
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;That UUID will point to an image titled &quot;packstack-ocata&quot;.&lt;/p&gt;

&lt;p&gt;Congratulations! You just created an image.&lt;/p&gt;

&lt;p&gt;However, there is virtually nothing different about &quot;packstack-ocata&quot; and the
CentOS image used to create it. All Packer did was launch a virtual machine
and create a snapshot of it.&lt;/p&gt;

&lt;p&gt;In order for Packer to make changes to the virtual machine, you must configure
&quot;provisioners&quot; in the &lt;code class=&quot;highlighter-rouge&quot;&gt;build.json&lt;/code&gt; file. Provisioners are just like Terraform's
concept of provisioners: steps that will execute commands on the running virtual
machine. Before you can add some provisioners to the Packer build file, you
first need to generate the scripts which will be run.&lt;/p&gt;

&lt;h3 id=&quot;generating-an-answer-file&quot;&gt;Generating an Answer File&lt;/h3&gt;

&lt;p&gt;In Part 1, PackStack was used to install an all-in-one OpenStack environment.
The command used was:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;packstack &lt;span class=&quot;nt&quot;&gt;--allinone&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This very simple command uses a set of sane defaults and results in a fully
functional all-in-one environment.&lt;/p&gt;

&lt;p&gt;However, to make the OpenStack environment run correctly each time a new
virtual machine is created, the installation needs to be tuned. To do this, a
custom &quot;answer file&quot; will be used when running PackStack.&lt;/p&gt;

&lt;p&gt;An answer file contains every configurable PackStack setting. It is very
large, with lots of options, and not something you want to write from scratch.
Instead, PackStack can generate an answer file to be used as a template.&lt;/p&gt;

&lt;p&gt;On a CentOS 7 virtual machine, which can even be the same virtual machine you
created in Part 1, run:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;packstack &lt;span class=&quot;nt&quot;&gt;--gen-answer-file&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;packstack-answers.txt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Copy the file to your workstation using &lt;code class=&quot;highlighter-rouge&quot;&gt;scp&lt;/code&gt; or some other means. Make a
directory called &lt;code class=&quot;highlighter-rouge&quot;&gt;files&lt;/code&gt; to store this answer file:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;pwd&lt;/span&gt;
/home/jtopjian/terraform-openstack-test
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;mkdir packer/files
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;scp &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; key/id_rsa centos@&amp;lt;ip&amp;gt;:packstack-answers.txt packer/files
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Once stored locally, make the following changes:&lt;/p&gt;

&lt;p&gt;First, locate the setting &lt;code class=&quot;highlighter-rouge&quot;&gt;CONFIG_CONTROLLER_HOST&lt;/code&gt;. This setting will have the
value of an IP address local to the virtual machine which generated this file:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;CONFIG_CONTROLLER_HOST=10.41.8.200
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Do a global search and replace of &lt;code class=&quot;highlighter-rouge&quot;&gt;10.41.8.200&lt;/code&gt; with &lt;code class=&quot;highlighter-rouge&quot;&gt;127.0.0.1&lt;/code&gt;.&lt;/p&gt;
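
&lt;p&gt;Assuming the answer file is stored at &lt;code class=&quot;highlighter-rouge&quot;&gt;packer/files/packstack-answers.txt&lt;/code&gt;, the
replacement can be done in one step with &lt;code class=&quot;highlighter-rouge&quot;&gt;sed&lt;/code&gt; (a sketch: substitute the
IP your own generated file contains; the demo file below stands in for the real one):&lt;/p&gt;

```shell
# Hypothetical demo file; in practice operate on packer/files/packstack-answers.txt
printf 'CONFIG_CONTROLLER_HOST=10.41.8.200\n' > /tmp/packstack-answers.txt

# Replace every occurrence of the generated local IP with 127.0.0.1
sed -i -e 's/10\.41\.8\.200/127.0.0.1/g' /tmp/packstack-answers.txt

grep 'CONFIG_CONTROLLER_HOST' /tmp/packstack-answers.txt
```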

&lt;p&gt;Next, use this opportunity to tune which services you want to enable for your
test environment. For example:&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gd&quot;&gt;- CONFIG_HORIZON_INSTALL=y
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+ CONFIG_HORIZON_INSTALL=n
&lt;/span&gt;&lt;span class=&quot;gd&quot;&gt;- CONFIG_CEILOMETER_INSTALL=y
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+ CONFIG_CEILOMETER_INSTALL=n
&lt;/span&gt;&lt;span class=&quot;gd&quot;&gt;- CONFIG_AODH_INSTALL=y
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+ CONFIG_AODH_INSTALL=n
&lt;/span&gt;&lt;span class=&quot;gd&quot;&gt;- CONFIG_GNOCCHI_INSTALL=y
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+ CONFIG_GNOCCHI_INSTALL=n
&lt;/span&gt;&lt;span class=&quot;gd&quot;&gt;- CONFIG_LBAAS_INSTALL=n
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+ CONFIG_LBAAS_INSTALL=y
&lt;/span&gt;&lt;span class=&quot;gd&quot;&gt;- CONFIG_NEUTRON_FWAAS=n
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+ CONFIG_NEUTRON_FWAAS=y
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;These are the services whose defaults I have personally flipped: some are
disabled by default but I want them enabled, while others are enabled by
default but I do not need them. Change the values to suit your needs.&lt;/p&gt;
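
&lt;p&gt;If you toggle the same options every time, a small loop can flip them
instead of hand-editing. This is a sketch: the option names and the demo file
path are illustrative, so adjust both to your own answer file:&lt;/p&gt;

```shell
answers=/tmp/packstack-answers.txt

# Hypothetical demo file standing in for the real answer file
printf 'CONFIG_HORIZON_INSTALL=y\nCONFIG_LBAAS_INSTALL=n\n' > "$answers"

# Disable services that are on by default but not needed
for opt in CONFIG_HORIZON_INSTALL; do
  sed -i -e "s/^${opt}=y/${opt}=n/" "$answers"
done

# Enable services that are off by default but wanted
for opt in CONFIG_LBAAS_INSTALL; do
  sed -i -e "s/^${opt}=n/${opt}=y/" "$answers"
done

cat "$answers"
```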

&lt;p class=&quot;alert alert-warn&quot;&gt;You might notice that there are several embedded passwords and secrets in this
answer file. Astute readers will realize that these passwords will all be used
for every virtual machine created with this answer file. For production use,
this is most definitely not secure. However, I consider this relatively safe
since these OpenStack environments are temporary and only for testing.&lt;/p&gt;

&lt;h3 id=&quot;installing-openstack&quot;&gt;Installing OpenStack&lt;/h3&gt;

&lt;p&gt;Next, begin building a &lt;code class=&quot;highlighter-rouge&quot;&gt;deploy.sh&lt;/code&gt; script. You can re-use the &lt;code class=&quot;highlighter-rouge&quot;&gt;deploy.sh&lt;/code&gt; script
from Part 1 as a start, with one initial change:&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  systemctl disable firewalld
  systemctl stop firewalld
  systemctl disable NetworkManager
  systemctl stop NetworkManager
  systemctl enable network
  systemctl start network

  yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
  yum install -y centos-release-openstack-ocata
  yum-config-manager --enable openstack-ocata
  yum update -y
  yum install -y openstack-packstack
&lt;span class=&quot;gd&quot;&gt;- packstack --allinone
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+ packstack --answer-file /home/centos/files/packstack-answers.txt
&lt;/span&gt;
  source /root/keystonerc_admin
  nova flavor-create m1.acctest 99 512 5 1 --ephemeral 10
  nova flavor-create m1.resize 98 512 6 1 --ephemeral 10
  _NETWORK_ID=$(openstack network show private -c id -f value)
  _SUBNET_ID=$(openstack subnet show private_subnet -c id -f value)
  _EXTGW_ID=$(openstack network show public -c id -f value)
  _IMAGE_ID=$(openstack image show cirros -c id -f value)

  echo &quot;&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_IMAGE_NAME=&quot;cirros&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_IMAGE_ID=&quot;$_IMAGE_ID&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_NETWORK_ID=$_NETWORK_ID &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_EXTGW_ID=$_EXTGW_ID &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_POOL_NAME=&quot;public&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_FLAVOR_ID=99 &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_FLAVOR_ID_RESIZE=98 &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_DOMAIN_NAME=default &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_TENANT_ID=\$OS_PROJECT_ID &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_SHARE_NETWORK_ID=&quot;foobar&quot; &amp;gt;&amp;gt; /root/keystonerc_admin

  echo &quot;&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_IMAGE_NAME=&quot;cirros&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_IMAGE_ID=&quot;$_IMAGE_ID&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_NETWORK_ID=$_NETWORK_ID &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_EXTGW_ID=$_EXTGW_ID &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_POOL_NAME=&quot;public&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_FLAVOR_ID=99 &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_FLAVOR_ID_RESIZE=98 &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_DOMAIN_NAME=default &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_TENANT_ID=\$OS_PROJECT_ID &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_SHARE_NETWORK_ID=&quot;foobar&quot; &amp;gt;&amp;gt; /root/keystonerc_demo

  yum install -y wget git
  wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
  chmod +x /usr/local/bin/gimme
  eval &quot;$(/usr/local/bin/gimme 1.8)&quot;
  export GOPATH=$HOME/go
  export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

  go get github.com/gophercloud/gophercloud
  pushd ~/go/src/github.com/gophercloud/gophercloud
  go get -u ./...
  popd

  cat &amp;gt;&amp;gt; /root/.bashrc &amp;lt;&amp;lt;EOF
  if [[ -f /usr/local/bin/gimme ]]; then
    eval &quot;\$(/usr/local/bin/gimme 1.8)&quot;
    export GOPATH=\$HOME/go
    export PATH=\$PATH:\$GOROOT/bin:\$GOPATH/bin
  fi

  gophercloudtest() {
    if [[ -n \$1 ]] &amp;amp;&amp;amp; [[ -n \$2 ]]; then
      pushd  ~/go/src/github.com/gophercloud/gophercloud
      go test -v -tags &quot;fixtures acceptance&quot; -run &quot;\$1&quot; github.com/gophercloud/gophercloud/acceptance/openstack/\$2 | tee ~/gophercloud.log
      popd
    fi
  }
  EOF
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Next, alter &lt;code class=&quot;highlighter-rouge&quot;&gt;packer/openstack/build.json&lt;/code&gt; with the following:&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;
  {
    &quot;builders&quot;: [{
      &quot;type&quot;: &quot;openstack&quot;,
      &quot;image_name&quot;: &quot;packstack-ocata&quot;,
      &quot;reuse_ips&quot;: true,
      &quot;ssh_username&quot;: &quot;centos&quot;,

      &quot;flavor&quot;: &quot;{{user `flavor`}}&quot;,
      &quot;security_groups&quot;: [&quot;{{user `secgroup`}}&quot;],
      &quot;source_image&quot;: &quot;{{user `image_id`}}&quot;,
      &quot;floating_ip_pool&quot;: &quot;{{user `pool`}}&quot;,
      &quot;networks&quot;: [&quot;{{user `network_id`}}&quot;]
&lt;span class=&quot;gd&quot;&gt;-   }]
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+   }],
+   &quot;provisioners&quot;: [
+     {
+       &quot;type&quot;: &quot;file&quot;,
+       &quot;source&quot;: &quot;../files&quot;,
+       &quot;destination&quot;: &quot;/home/centos/files&quot;
+     },
+     {
+       &quot;type&quot;: &quot;shell&quot;,
+       &quot;inline&quot;: [
+         &quot;sudo bash /home/centos/files/deploy.sh&quot;
+       ]
+     }
+   ]
&lt;/span&gt;  }

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;There are two provisioners being created here: one which will copy the &lt;code class=&quot;highlighter-rouge&quot;&gt;files&lt;/code&gt;
directory to &lt;code class=&quot;highlighter-rouge&quot;&gt;/home/centos/files&lt;/code&gt; and one to run the &lt;code class=&quot;highlighter-rouge&quot;&gt;deploy.sh&lt;/code&gt; script.&lt;/p&gt;

&lt;p class=&quot;alert alert-info&quot;&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;files&lt;/code&gt; was created outside of the &lt;code class=&quot;highlighter-rouge&quot;&gt;openstack&lt;/code&gt; directory because these files
are not unique to OpenStack. You can use the same files to build images in
other clouds. For example, create a &lt;code class=&quot;highlighter-rouge&quot;&gt;packer/aws&lt;/code&gt; directory and create a
similar &lt;code class=&quot;highlighter-rouge&quot;&gt;build.json&lt;/code&gt; file for AWS.&lt;/p&gt;

&lt;p&gt;With that in place, run… actually, don't run yet. I'll save you a step.
While the current configuration &lt;em&gt;will&lt;/em&gt; launch an instance, install an all-in-one
OpenStack environment, and create a snapshot, OpenStack will not work correctly
when you create a virtual machine based on that image.&lt;/p&gt;

&lt;p&gt;In order for it to work correctly, there are some more modifications which
need to be made so that OpenStack starts correctly on a new virtual machine.&lt;/p&gt;

&lt;h3 id=&quot;removing-unique-data&quot;&gt;Removing Unique Data&lt;/h3&gt;

&lt;p&gt;In order to remove the unique data of the OpenStack environment, add the
following to &lt;code class=&quot;highlighter-rouge&quot;&gt;deploy.sh&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gi&quot;&gt;+ hostnamectl set-hostname localhost
+
&lt;/span&gt;  systemctl disable firewalld
  systemctl stop firewalld
  systemctl disable NetworkManager
  systemctl stop NetworkManager
  systemctl enable network
  systemctl start network

  yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
  yum install -y centos-release-openstack-ocata
  yum-config-manager --enable openstack-ocata
  yum update -y
  yum install -y openstack-packstack
  packstack --answer-file /home/centos/files/packstack-answers.txt

  source /root/keystonerc_admin
  nova flavor-create m1.acctest 99 512 5 1 --ephemeral 10
  nova flavor-create m1.resize 98 512 6 1 --ephemeral 10
  _NETWORK_ID=$(openstack network show private -c id -f value)
  _SUBNET_ID=$(openstack subnet show private_subnet -c id -f value)
  _EXTGW_ID=$(openstack network show public -c id -f value)
  _IMAGE_ID=$(openstack image show cirros -c id -f value)

  echo &quot;&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_IMAGE_NAME=&quot;cirros&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_IMAGE_ID=&quot;$_IMAGE_ID&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_NETWORK_ID=$_NETWORK_ID &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_EXTGW_ID=$_EXTGW_ID &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_POOL_NAME=&quot;public&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_FLAVOR_ID=99 &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_FLAVOR_ID_RESIZE=98 &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_DOMAIN_NAME=default &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_TENANT_ID=\$OS_PROJECT_ID &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_SHARE_NETWORK_ID=&quot;foobar&quot; &amp;gt;&amp;gt; /root/keystonerc_admin

  echo &quot;&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_IMAGE_NAME=&quot;cirros&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_IMAGE_ID=&quot;$_IMAGE_ID&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_NETWORK_ID=$_NETWORK_ID &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_EXTGW_ID=$_EXTGW_ID &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_POOL_NAME=&quot;public&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_FLAVOR_ID=99 &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_FLAVOR_ID_RESIZE=98 &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_DOMAIN_NAME=default &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_TENANT_ID=\$OS_PROJECT_ID &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_SHARE_NETWORK_ID=&quot;foobar&quot; &amp;gt;&amp;gt; /root/keystonerc_demo

  yum install -y wget git
  wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
  chmod +x /usr/local/bin/gimme
  eval &quot;$(/usr/local/bin/gimme 1.8)&quot;
  export GOPATH=$HOME/go
  export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

  go get github.com/gophercloud/gophercloud
  pushd ~/go/src/github.com/gophercloud/gophercloud
  go get -u ./...
  popd

  cat &amp;gt;&amp;gt; /root/.bashrc &amp;lt;&amp;lt;EOF
  if [[ -f /usr/local/bin/gimme ]]; then
    eval &quot;\$(/usr/local/bin/gimme 1.8)&quot;
    export GOPATH=\$HOME/go
    export PATH=\$PATH:\$GOROOT/bin:\$GOPATH/bin
  fi

  gophercloudtest() {
    if [[ -n \$1 ]] &amp;amp;&amp;amp; [[ -n \$2 ]]; then
      pushd  ~/go/src/github.com/gophercloud/gophercloud
      go test -v -tags &quot;fixtures acceptance&quot; -run &quot;\$1&quot; github.com/gophercloud/gophercloud/acceptance/openstack/\$2 | tee ~/gophercloud.log
      popd
    fi
  }
  EOF
&lt;span class=&quot;gi&quot;&gt;+
+ systemctl stop openstack-cinder-backup.service
+ systemctl stop openstack-cinder-scheduler.service
+ systemctl stop openstack-cinder-volume.service
+ systemctl stop openstack-nova-cert.service
+ systemctl stop openstack-nova-compute.service
+ systemctl stop openstack-nova-conductor.service
+ systemctl stop openstack-nova-consoleauth.service
+ systemctl stop openstack-nova-novncproxy.service
+ systemctl stop openstack-nova-scheduler.service
+ systemctl stop neutron-dhcp-agent.service
+ systemctl stop neutron-l3-agent.service
+ systemctl stop neutron-lbaasv2-agent.service
+ systemctl stop neutron-metadata-agent.service
+ systemctl stop neutron-openvswitch-agent.service
+ systemctl stop neutron-metering-agent.service
+
+ mysql -e &quot;update services set deleted_at=now(), deleted=id&quot; cinder
+ mysql -e &quot;update services set deleted_at=now(), deleted=id&quot; nova
+ mysql -e &quot;update compute_nodes set deleted_at=now(), deleted=id&quot; nova
+ for i in $(openstack network agent list -c ID -f value); do
+   neutron agent-delete $i
+ done
+
+ systemctl stop httpd
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The above adds three pieces to &lt;code class=&quot;highlighter-rouge&quot;&gt;deploy.sh&lt;/code&gt;: setting the hostname to
&lt;code class=&quot;highlighter-rouge&quot;&gt;localhost&lt;/code&gt;, stopping all OpenStack services, and deleting all known agents
for Cinder, Nova, and Neutron.&lt;/p&gt;

&lt;p&gt;Now, with the above in place, run… no, not yet, either.&lt;/p&gt;

&lt;p&gt;Remember the last step outlined in the beginning of this post:&lt;/p&gt;

&lt;p class=&quot;alert alert-warn&quot;&gt;Upon creation of a new virtual machine, ensure OpenStack knows about the new unique information.&lt;/p&gt;

&lt;p&gt;How is the new virtual machine going to configure itself with new information?
One solution is to create an &lt;code class=&quot;highlighter-rouge&quot;&gt;rc.local&lt;/code&gt; file and place it in the &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc&lt;/code&gt;
directory during the Packer provisioning phase. This way, when the virtual
machine launches, &lt;code class=&quot;highlighter-rouge&quot;&gt;rc.local&lt;/code&gt; is triggered and acts as a post-boot script.&lt;/p&gt;

&lt;h3 id=&quot;adding-an-rclocal-file&quot;&gt;Adding an rc.local File&lt;/h3&gt;

&lt;p&gt;First, add the following to &lt;code class=&quot;highlighter-rouge&quot;&gt;deploy.sh&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  hostnamectl set-hostname localhost

  systemctl disable firewalld
  systemctl stop firewalld
  systemctl disable NetworkManager
  systemctl stop NetworkManager
  systemctl enable network
  systemctl start network

  yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
  yum install -y centos-release-openstack-ocata
  yum-config-manager --enable openstack-ocata
  yum update -y
  yum install -y openstack-packstack
  packstack --answer-file /home/centos/files/packstack-answers.txt

  source /root/keystonerc_admin
  nova flavor-create m1.acctest 99 512 5 1 --ephemeral 10
  nova flavor-create m1.resize 98 512 6 1 --ephemeral 10
  _NETWORK_ID=$(openstack network show private -c id -f value)
  _SUBNET_ID=$(openstack subnet show private_subnet -c id -f value)
  _EXTGW_ID=$(openstack network show public -c id -f value)
  _IMAGE_ID=$(openstack image show cirros -c id -f value)

  echo &quot;&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_IMAGE_NAME=&quot;cirros&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_IMAGE_ID=&quot;$_IMAGE_ID&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_NETWORK_ID=$_NETWORK_ID &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_EXTGW_ID=$_EXTGW_ID &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_POOL_NAME=&quot;public&quot; &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_FLAVOR_ID=99 &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_FLAVOR_ID_RESIZE=98 &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_DOMAIN_NAME=default &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_TENANT_ID=\$OS_PROJECT_ID &amp;gt;&amp;gt; /root/keystonerc_admin
  echo export OS_SHARE_NETWORK_ID=&quot;foobar&quot; &amp;gt;&amp;gt; /root/keystonerc_admin

  echo &quot;&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_IMAGE_NAME=&quot;cirros&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_IMAGE_ID=&quot;$_IMAGE_ID&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_NETWORK_ID=$_NETWORK_ID &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_EXTGW_ID=$_EXTGW_ID &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_POOL_NAME=&quot;public&quot; &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_FLAVOR_ID=99 &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_FLAVOR_ID_RESIZE=98 &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_DOMAIN_NAME=default &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_TENANT_NAME=\$OS_PROJECT_NAME &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_TENANT_ID=\$OS_PROJECT_ID &amp;gt;&amp;gt; /root/keystonerc_demo
  echo export OS_SHARE_NETWORK_ID=&quot;foobar&quot; &amp;gt;&amp;gt; /root/keystonerc_demo

  yum install -y wget git
  wget -O /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
  chmod +x /usr/local/bin/gimme
  eval &quot;$(/usr/local/bin/gimme 1.8)&quot;
  export GOPATH=$HOME/go
  export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

  go get github.com/gophercloud/gophercloud
  pushd ~/go/src/github.com/gophercloud/gophercloud
  go get -u ./...
  popd

  cat &amp;gt;&amp;gt; /root/.bashrc &amp;lt;&amp;lt;EOF
  if [[ -f /usr/local/bin/gimme ]]; then
    eval &quot;\$(/usr/local/bin/gimme 1.8)&quot;
    export GOPATH=\$HOME/go
    export PATH=\$PATH:\$GOROOT/bin:\$GOPATH/bin
  fi

  gophercloudtest() {
    if [[ -n \$1 ]] &amp;amp;&amp;amp; [[ -n \$2 ]]; then
      pushd  ~/go/src/github.com/gophercloud/gophercloud
      go test -v -tags &quot;fixtures acceptance&quot; -run &quot;\$1&quot; github.com/gophercloud/gophercloud/acceptance/openstack/\$2 | tee ~/gophercloud.log
      popd
    fi
  }
  EOF

  systemctl stop openstack-cinder-backup.service
  systemctl stop openstack-cinder-scheduler.service
  systemctl stop openstack-cinder-volume.service
  systemctl stop openstack-nova-cert.service
  systemctl stop openstack-nova-compute.service
  systemctl stop openstack-nova-conductor.service
  systemctl stop openstack-nova-consoleauth.service
  systemctl stop openstack-nova-novncproxy.service
  systemctl stop openstack-nova-scheduler.service
  systemctl stop neutron-dhcp-agent.service
  systemctl stop neutron-l3-agent.service
  systemctl stop neutron-lbaasv2-agent.service
  systemctl stop neutron-metadata-agent.service
  systemctl stop neutron-openvswitch-agent.service
  systemctl stop neutron-metering-agent.service

  mysql -e &quot;update services set deleted_at=now(), deleted=id&quot; cinder
  mysql -e &quot;update services set deleted_at=now(), deleted=id&quot; nova
  mysql -e &quot;update compute_nodes set deleted_at=now(), deleted=id&quot; nova
  for i in $(openstack network agent list -c ID -f value); do
    neutron agent-delete $i
  done

  systemctl stop httpd

&lt;span class=&quot;gi&quot;&gt;+ cp /home/centos/files/rc.local /etc
+ chmod +x /etc/rc.local
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Next, create a file called &lt;code class=&quot;highlighter-rouge&quot;&gt;rc.local&lt;/code&gt; inside the &lt;code class=&quot;highlighter-rouge&quot;&gt;packer/files&lt;/code&gt; directory:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;#!/bin/bash&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;set&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-x&lt;/span&gt;

&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;HOME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;/root

sleep 60

systemctl restart rabbitmq-server
&lt;span class=&quot;k&quot;&gt;while&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[[&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;true&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;]]&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do
  &lt;/span&gt;pgrep &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; rabbit
  &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[[&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$?&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; 0 &lt;span class=&quot;o&quot;&gt;]]&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;then
    &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;break
  &lt;/span&gt;&lt;span class=&quot;k&quot;&gt;fi
  &lt;/span&gt;sleep 10
  systemctl restart rabbitmq-server
&lt;span class=&quot;k&quot;&gt;done

&lt;/span&gt;nova-manage cell_v2 discover_hosts
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The above is simple: it restarts RabbitMQ and runs &lt;code class=&quot;highlighter-rouge&quot;&gt;nova-manage&lt;/code&gt; so the
node re-discovers itself as a compute node.&lt;/p&gt;

&lt;p class=&quot;alert alert-info&quot;&gt;Why restart RabbitMQ? I have no idea. I've found it needs to be done
for OpenStack to work correctly.&lt;/p&gt;

&lt;p&gt;I also mentioned I'll show how to access the OpenStack services from
&lt;em&gt;outside&lt;/em&gt; the virtual machine, so you don't have to log in to the virtual
machine to run tests.&lt;/p&gt;

&lt;p&gt;To do that, add the following to &lt;code class=&quot;highlighter-rouge&quot;&gt;rc.local&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  #!/bin/bash
  set -x

  export HOME=/root

  sleep 60

&lt;span class=&quot;gi&quot;&gt;+ public_ip=$(curl http://169.254.169.254/latest/meta-data/public-ipv4/)
+ if [[ -n $public_ip ]]; then
+   while true ; do
+     mysql -e &quot;update endpoint set url = replace(url, '127.0.0.1', '$public_ip')&quot; keystone
+     if [[ $? == 0 ]]; then
+       break
+     fi
+     sleep 10
+   done
&lt;/span&gt;
&lt;span class=&quot;gi&quot;&gt;+   sed -i -e &quot;s/127.0.0.1/$public_ip/g&quot; /root/keystonerc_demo
+   sed -i -e &quot;s/127.0.0.1/$public_ip/g&quot; /root/keystonerc_admin
+ fi
&lt;/span&gt;
  systemctl restart rabbitmq-server
  while [[ true ]]; do
    pgrep -f rabbit
    if [[ $? == 0 ]]; then
      break
    fi
    sleep 10
    systemctl restart rabbitmq-server
  done

&lt;span class=&quot;gi&quot;&gt;+ systemctl restart openstack-cinder-api.service
+ systemctl restart openstack-cinder-backup.service
+ systemctl restart openstack-cinder-scheduler.service
+ systemctl restart openstack-cinder-volume.service
+ systemctl restart openstack-nova-cert.service
+ systemctl restart openstack-nova-compute.service
+ systemctl restart openstack-nova-conductor.service
+ systemctl restart openstack-nova-consoleauth.service
+ systemctl restart openstack-nova-novncproxy.service
+ systemctl restart openstack-nova-scheduler.service
+ systemctl restart neutron-dhcp-agent.service
+ systemctl restart neutron-l3-agent.service
+ systemctl restart neutron-lbaasv2-agent.service
+ systemctl restart neutron-metadata-agent.service
+ systemctl restart neutron-openvswitch-agent.service
+ systemctl restart neutron-metering-agent.service
+ systemctl restart httpd
&lt;/span&gt;
  nova-manage cell_v2 discover_hosts

&lt;span class=&quot;gi&quot;&gt;+ iptables -I INPUT -p tcp --dport 80 -j ACCEPT
+ ip6tables -I INPUT -p tcp --dport 80 -j ACCEPT
+ cp /root/keystonerc* /var/www/html
+ chmod 666 /var/www/html/keystonerc*
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Three steps have been added above:&lt;/p&gt;

&lt;p&gt;The first queries the EC2 metadata service to discover the virtual machine's
public IP. Once the public IP is known, the &lt;code class=&quot;highlighter-rouge&quot;&gt;endpoint&lt;/code&gt; table in the
&lt;code class=&quot;highlighter-rouge&quot;&gt;keystone&lt;/code&gt; database is updated with it. By default, PackStack sets the
endpoints of the Keystone catalog to &lt;code class=&quot;highlighter-rouge&quot;&gt;127.0.0.1&lt;/code&gt;, which prevents any
interaction with OpenStack from outside the virtual machine. Changing the
endpoints to the public IP resolves this.&lt;/p&gt;

&lt;p&gt;The &lt;code class=&quot;highlighter-rouge&quot;&gt;keystonerc_demo&lt;/code&gt; and &lt;code class=&quot;highlighter-rouge&quot;&gt;keystonerc_admin&lt;/code&gt; files are also updated with the
public IP.&lt;/p&gt;
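&lt;p&gt;To see what that &lt;code class=&quot;highlighter-rouge&quot;&gt;sed&lt;/code&gt; substitution does in isolation, here's a small
stand-alone example. The address &lt;code class=&quot;highlighter-rouge&quot;&gt;203.0.113.10&lt;/code&gt; is a made-up placeholder,
not a real deployment value:&lt;/p&gt;

```shell
# Create a throwaway rc file containing the 127.0.0.1 placeholder,
# then swap the placeholder for a (made-up) public IP, exactly as
# rc.local does to keystonerc_demo and keystonerc_admin:
echo "export OS_AUTH_URL=http://127.0.0.1:5000/v3" > /tmp/keystonerc_demo
sed -i -e "s/127.0.0.1/203.0.113.10/g" /tmp/keystonerc_demo
cat /tmp/keystonerc_demo
```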

&lt;p class=&quot;alert alert-info&quot;&gt;Why not just set the public IP in the PackStack answer file? Because the
public IP will not be known until the virtual machine launches, which is
&lt;em&gt;after&lt;/em&gt; PackStack has run. And that's why &lt;code class=&quot;highlighter-rouge&quot;&gt;127.0.0.1&lt;/code&gt; was used earlier:
it's an easy placeholder to search for and replace, &lt;em&gt;and&lt;/em&gt; it still
produces a working OpenStack environment if it's never replaced.&lt;/p&gt;

&lt;p&gt;The second step restarts all OpenStack services so they're aware of the
new endpoints.&lt;/p&gt;

&lt;p&gt;The third step copies the &lt;code class=&quot;highlighter-rouge&quot;&gt;keystonerc_demo&lt;/code&gt; and &lt;code class=&quot;highlighter-rouge&quot;&gt;keystonerc_admin&lt;/code&gt; files to
&lt;code class=&quot;highlighter-rouge&quot;&gt;/var/www/html/&lt;/code&gt;. This way, you can &lt;code class=&quot;highlighter-rouge&quot;&gt;wget&lt;/code&gt; the files from
http://public-ip/keystonerc_demo and http://public-ip/keystonerc_admin
and save them to your workstation. You can then source them and begin
interacting with OpenStack remotely.&lt;/p&gt;
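&lt;p&gt;The &lt;code class=&quot;highlighter-rouge&quot;&gt;keystonerc&lt;/code&gt; files are plain shell scripts that export &lt;code class=&quot;highlighter-rouge&quot;&gt;OS_*&lt;/code&gt;
environment variables. A minimal sketch of the workflow, using a made-up file in
place of the real download:&lt;/p&gt;

```shell
# Stand-in for the downloaded keystonerc_demo; the real file written by
# PackStack exports more variables (password, project, region, etc.):
cat > /tmp/keystonerc_demo <<'EOF'
export OS_USERNAME=demo
export OS_AUTH_URL=http://203.0.113.10:5000/v3
EOF

# Source it; any OpenStack client run in this shell now picks up
# these credentials:
. /tmp/keystonerc_demo
echo "$OS_USERNAME"
```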

&lt;p&gt;&lt;em&gt;Now&lt;/em&gt;, with all of that in place, re-run Packer:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;pwd&lt;/span&gt;
/home/jtopjian/terraform-openstack-test/packer/openstack
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;packer build &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-var&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'flavor=m1.large'&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-var&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'secgroup=AllowAll'&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-var&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'image_id=9abadd38-a33d-44c2-8356-b8b8ae184e04'&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-var&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'pool=public'&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-var&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'network_id=b0b12e8f-a695-480e-9dc2-3dc8ac2d55fd'&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    build.json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;using-the-image&quot;&gt;Using the Image&lt;/h2&gt;

&lt;p&gt;When the build is complete, you will have a new image called &lt;code class=&quot;highlighter-rouge&quot;&gt;packstack-ocata&lt;/code&gt;
that you can create a virtual machine with.&lt;/p&gt;

&lt;p&gt;As an example, you can use Terraform to launch the image:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;variable &quot;key_name&quot; {}
variable &quot;network_id&quot; {}

variable &quot;pool&quot; {
  default = &quot;public&quot;
}

variable &quot;flavor&quot; {
  default = &quot;m1.xlarge&quot;
}

data &quot;openstack_images_image_v2&quot; &quot;packstack&quot; {
  name        = &quot;packstack-ocata&quot;
  most_recent = true
}

resource &quot;random_id&quot; &quot;security_group_name&quot; {
  prefix      = &quot;openstack_test_instance_allow_all_&quot;
  byte_length = 8
}

resource &quot;openstack_networking_floatingip_v2&quot; &quot;openstack_acc_tests&quot; {
  pool = &quot;${var.pool}&quot;
}

resource &quot;openstack_networking_secgroup_v2&quot; &quot;openstack_acc_tests&quot; {
  name        = &quot;${random_id.security_group_name.hex}&quot;
  description = &quot;Rules for openstack acceptance tests&quot;
}

resource &quot;openstack_networking_secgroup_rule_v2&quot; &quot;openstack_acc_tests_rule_1&quot; {
  security_group_id = &quot;${openstack_networking_secgroup_v2.openstack_acc_tests.id}&quot;
  direction         = &quot;ingress&quot;
  ethertype         = &quot;IPv4&quot;
  protocol          = &quot;tcp&quot;
  port_range_min    = 1
  port_range_max    = 65535
  remote_ip_prefix  = &quot;0.0.0.0/0&quot;
}

resource &quot;openstack_networking_secgroup_rule_v2&quot; &quot;openstack_acc_tests_rule_2&quot; {
  security_group_id = &quot;${openstack_networking_secgroup_v2.openstack_acc_tests.id}&quot;
  direction         = &quot;ingress&quot;
  ethertype         = &quot;IPv6&quot;
  protocol          = &quot;tcp&quot;
  port_range_min    = 1
  port_range_max    = 65535
  remote_ip_prefix  = &quot;::/0&quot;
}

resource &quot;openstack_networking_secgroup_rule_v2&quot; &quot;openstack_acc_tests_rule_3&quot; {
  security_group_id = &quot;${openstack_networking_secgroup_v2.openstack_acc_tests.id}&quot;
  direction         = &quot;ingress&quot;
  ethertype         = &quot;IPv4&quot;
  protocol          = &quot;udp&quot;
  port_range_min    = 1
  port_range_max    = 65535
  remote_ip_prefix  = &quot;0.0.0.0/0&quot;
}

resource &quot;openstack_networking_secgroup_rule_v2&quot; &quot;openstack_acc_tests_rule_4&quot; {
  security_group_id = &quot;${openstack_networking_secgroup_v2.openstack_acc_tests.id}&quot;
  direction         = &quot;ingress&quot;
  ethertype         = &quot;IPv6&quot;
  protocol          = &quot;udp&quot;
  port_range_min    = 1
  port_range_max    = 65535
  remote_ip_prefix  = &quot;::/0&quot;
}

resource &quot;openstack_networking_secgroup_rule_v2&quot; &quot;openstack_acc_tests_rule_5&quot; {
  security_group_id = &quot;${openstack_networking_secgroup_v2.openstack_acc_tests.id}&quot;
  direction         = &quot;ingress&quot;
  ethertype         = &quot;IPv4&quot;
  protocol          = &quot;icmp&quot;
  remote_ip_prefix  = &quot;0.0.0.0/0&quot;
}

resource &quot;openstack_networking_secgroup_rule_v2&quot; &quot;openstack_acc_tests_rule_6&quot; {
  security_group_id = &quot;${openstack_networking_secgroup_v2.openstack_acc_tests.id}&quot;
  direction         = &quot;ingress&quot;
  ethertype         = &quot;IPv6&quot;
  protocol          = &quot;icmp&quot;
  remote_ip_prefix  = &quot;::/0&quot;
}

resource &quot;openstack_compute_instance_v2&quot; &quot;openstack_acc_tests&quot; {
  name            = &quot;openstack_acc_tests&quot;
  image_id        = &quot;${data.openstack_images_image_v2.packstack.id}&quot;
  flavor_name     = &quot;${var.flavor}&quot;
  key_pair        = &quot;${var.key_name}&quot;
  security_groups = [&quot;${openstack_networking_secgroup_v2.openstack_acc_tests.name}&quot;]

  network {
    uuid = &quot;${var.network_id}&quot;
  }
}

resource &quot;openstack_compute_floatingip_associate_v2&quot; &quot;openstack_acc_tests&quot; {
  instance_id = &quot;${openstack_compute_instance_v2.openstack_acc_tests.id}&quot;
  floating_ip = &quot;${openstack_networking_floatingip_v2.openstack_acc_tests.address}&quot;
}

resource &quot;null_resource&quot; &quot;rc_files&quot; {
  provisioner &quot;local-exec&quot; {
    command = &amp;lt;&amp;lt;EOF
      while true ; do
        wget http://${openstack_compute_floatingip_associate_v2.openstack_acc_tests.floating_ip}/keystonerc_demo 2&amp;gt; /dev/null
        if [ $? = 0 ]; then
          break
        fi
        sleep 20
      done

      wget http://${openstack_compute_floatingip_associate_v2.openstack_acc_tests.floating_ip}/keystonerc_admin
    EOF
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The above Terraform configuration will do the following:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Search for the latest image titled &quot;packstack-ocata&quot;.&lt;/li&gt;
  &lt;li&gt;Create a floating IP.&lt;/li&gt;
  &lt;li&gt;Create a security group with a unique name and six rules to allow all TCP,
UDP, and ICMP traffic.&lt;/li&gt;
  &lt;li&gt;Launch an instance using the &quot;packstack-ocata&quot; image.&lt;/li&gt;
  &lt;li&gt;Associate the floating IP to the instance.&lt;/li&gt;
  &lt;li&gt;Poll the instance every 20 seconds to see if http://publicip/keystonerc_demo
is available. When it is available, download it, along with &lt;code class=&quot;highlighter-rouge&quot;&gt;keystonerc_admin&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To run this Terraform configuration, do:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;terraform apply &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-var&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;key_name=&amp;lt;keypair name&amp;gt;&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-var&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;network_id=&amp;lt;network uuid&amp;gt;&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-var&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;pool=&amp;lt;pool name&amp;gt;&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
    &lt;span class=&quot;nt&quot;&gt;-var&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;flavor=&amp;lt;flavor name&amp;gt;&quot;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
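&lt;p&gt;Rather than repeating the &lt;code class=&quot;highlighter-rouge&quot;&gt;-var&lt;/code&gt; flags on every run, you can also put
the values in a &lt;code class=&quot;highlighter-rouge&quot;&gt;terraform.tfvars&lt;/code&gt; file in the same directory, which
Terraform reads automatically. The values below are placeholders:&lt;/p&gt;

```
key_name   = "my-keypair"
network_id = "b0b12e8f-a695-480e-9dc2-3dc8ac2d55fd"
pool       = "public"
flavor     = "m1.xlarge"
```

&lt;p&gt;With that file in place, a bare &lt;code class=&quot;highlighter-rouge&quot;&gt;terraform apply&lt;/code&gt; picks up the values.&lt;/p&gt;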

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;This blog post detailed how to create a reusable image with OpenStack Ocata
already installed. This allows you to create a standard testing environment in a
fraction of the time that it takes to build the environment from scratch.&lt;/p&gt;
</description>
				<pubDate>Sat, 07 Oct 2017 00:00:00 -0600</pubDate>
				<link>http://terrarum.net/blog/building-openstack-environments-2.html</link>
				<guid isPermaLink="true">http://terrarum.net/blog/building-openstack-environments-2.html</guid>
			</item>
		
			<item>
				<title>Building OpenStack Environments</title>
				<description>&lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;/h2&gt;

&lt;p&gt;I work on a number of OpenStack-based projects. In order to make sure they work
correctly, I need to test them against an OpenStack environment. It's usually
not a good idea to test on a production environment since mistakes and bugs
can cause damage to production.&lt;/p&gt;

&lt;p&gt;So the next best option is to create an OpenStack environment strictly for
testing purposes. This blog post will describe how to create such an
environment.&lt;/p&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#where-to-run-openstack&quot; id=&quot;markdown-toc-where-to-run-openstack&quot;&gt;Where to Run OpenStack&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#virtualbox&quot; id=&quot;markdown-toc-virtualbox&quot;&gt;VirtualBox&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#aws&quot; id=&quot;markdown-toc-aws&quot;&gt;AWS&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#other-spot-pricing-clouds&quot; id=&quot;markdown-toc-other-spot-pricing-clouds&quot;&gt;Other Spot Pricing Clouds&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#your-own-cloud&quot; id=&quot;markdown-toc-your-own-cloud&quot;&gt;Your Own Cloud&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#provisioning-a-virtual-machine&quot; id=&quot;markdown-toc-provisioning-a-virtual-machine&quot;&gt;Provisioning a Virtual Machine&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#terraform&quot; id=&quot;markdown-toc-terraform&quot;&gt;Terraform&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#deploying-to-aws&quot; id=&quot;markdown-toc-deploying-to-aws&quot;&gt;Deploying to AWS&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#installing-openstack&quot; id=&quot;markdown-toc-installing-openstack&quot;&gt;Installing OpenStack&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#devstack&quot; id=&quot;markdown-toc-devstack&quot;&gt;DevStack&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#packstack&quot; id=&quot;markdown-toc-packstack&quot;&gt;PackStack&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#installing-openstack-with-packstack&quot; id=&quot;markdown-toc-installing-openstack-with-packstack&quot;&gt;Installing OpenStack with PackStack&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#testing-with-openstack&quot; id=&quot;markdown-toc-testing-with-openstack&quot;&gt;Testing with OpenStack&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#automating-the-process&quot; id=&quot;markdown-toc-automating-the-process&quot;&gt;Automating the Process&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;where-to-run-openstack&quot;&gt;Where to Run OpenStack&lt;/h2&gt;

&lt;p&gt;The first topic of consideration is &lt;em&gt;where&lt;/em&gt; to run OpenStack.&lt;/p&gt;

&lt;h3 id=&quot;virtualbox&quot;&gt;VirtualBox&lt;/h3&gt;

&lt;p&gt;At a minimum, you can install VirtualBox on your workstation and create a
virtual machine. This is quick, easy, and free. However, you're limited to the
resources of your workstation. For example, if your laptop only has 4GB of
memory and two cores, OpenStack is going to run slow.&lt;/p&gt;

&lt;h3 id=&quot;aws&quot;&gt;AWS&lt;/h3&gt;

&lt;p&gt;Another option is to use AWS. While AWS offers a free tier, it's restricted
(I think) to the &lt;code class=&quot;highlighter-rouge&quot;&gt;t2.micro&lt;/code&gt; flavor. This flavor only supports 1 vCPU, which
is usually worse than your laptop. Larger instances will cost anywhere from
$0.25 to $5.00 (and up!) per hour to run. It can get expensive.&lt;/p&gt;

&lt;p&gt;However, AWS offers &quot;spot-instances&quot;. These are virtual machines that cost a
fraction of normal virtual machines. This is possible because spot instances
run on spare, unused capacity in Amazon's cloud. The catch is that your virtual
machine could be deleted when a higher paying customer wants to use the space.
You certainly don't want to do this for production (well, you &lt;em&gt;can&lt;/em&gt;, and that's
a fun exercise on its own), but for testing, it's perfect.&lt;/p&gt;

&lt;p&gt;With Spot Instances, you can run an &lt;code class=&quot;highlighter-rouge&quot;&gt;m3.xlarge&lt;/code&gt; flavor, which consists of 4
vCPUs and 16GB of memory, for $0.05 per hour. An afternoon of work will cost you
$0.20. Well worth the cost of 4 vCPUs and 16GB of memory, in my opinion.&lt;/p&gt;
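&lt;p&gt;The arithmetic behind that estimate, taking an afternoon as four hours:&lt;/p&gt;

```shell
# 4 hours at $0.05/hour:
awk 'BEGIN { printf "$%.2f\n", 4 * 0.05 }'   # prints $0.20
```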

&lt;p class=&quot;alert alert-warn&quot;&gt;&lt;a href=&quot;https://aws.amazon.com/ec2/spot/pricing/&quot;&gt;Spot Pricing&lt;/a&gt; is constantly changing.
Make sure you check the current price before you begin working. And make sure
you do not leave your virtual machine running indefinitely!&lt;/p&gt;

&lt;h4 id=&quot;other-spot-pricing-clouds&quot;&gt;Other Spot Pricing Clouds&lt;/h4&gt;

&lt;p&gt;Both Google and Azure offer spot instances; however, I have not had time to
try them, so I can't comment.&lt;/p&gt;

&lt;h3 id=&quot;your-own-cloud&quot;&gt;Your Own Cloud&lt;/h3&gt;

&lt;p&gt;The best resource is your own cloud. Maybe you already have a home lab set up
or your place of &lt;code class=&quot;highlighter-rouge&quot;&gt;$work&lt;/code&gt; has a cloud you can use. This way, you can have a large
amount of resources available to use for free.&lt;/p&gt;

&lt;h2 id=&quot;provisioning-a-virtual-machine&quot;&gt;Provisioning a Virtual Machine&lt;/h2&gt;

&lt;p&gt;Once you have your location sorted out, you need to decide how to interact with
the cloud to provision a virtual machine.&lt;/p&gt;

&lt;p&gt;At a minimum, you can use the standard GUI or console that the cloud provides.
This works, but it's a hassle to have to manually go through all settings each
time you want to launch a new virtual machine. It's always best to test with
a clean environment, so you will be creating and destroying virtual machines
a lot. Manually setting up virtual machines will get tedious and is prone to
human error. Therefore, it's better to use a tool to automate the process.&lt;/p&gt;

&lt;h3 id=&quot;terraform&quot;&gt;Terraform&lt;/h3&gt;

&lt;p&gt;&lt;a href=&quot;https://www.terraform.io/&quot;&gt;Terraform&lt;/a&gt; is a tool that enables you to
declaratively create infrastructure. Think of it like Puppet or Chef, but for
virtual machines and virtual networks instead of files and packages.&lt;/p&gt;

&lt;p&gt;I highly recommend Terraform for this, though I admit I am biased because I
spend a lot of time contributing to the Terraform project.&lt;/p&gt;

&lt;h4 id=&quot;deploying-to-aws&quot;&gt;Deploying to AWS&lt;/h4&gt;

&lt;p&gt;As a reference example, I'll show how to use Terraform to deploy to AWS. Before
you begin, make sure you have a valid AWS account and you have gone through the
&lt;a href=&quot;https://www.terraform.io/intro/index.html&quot;&gt;Terraform intro&lt;/a&gt;.&lt;/p&gt;

&lt;p class=&quot;alert alert-info&quot;&gt;There's some irony about using AWS to deploy OpenStack. However, some
readers might not have access to an OpenStack cloud to deploy to. Please
don't turn this into a political discussion.&lt;/p&gt;

&lt;p&gt;On your workstation, open a terminal and make a directory:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;pwd&lt;/span&gt;
/home/jtopjian
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;mkdir terraform-openstack-test
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;terraform-openstack-test
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Next, generate an SSH key pair:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;pwd&lt;/span&gt;
/home/jtopjian/terraform-openstack-test
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;mkdir key
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;key
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;ssh-keygen &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; rsa &lt;span class=&quot;nt&quot;&gt;-N&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;''&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; id_rsa
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; ..
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Next, create a &lt;code class=&quot;highlighter-rouge&quot;&gt;main.tf&lt;/code&gt; file which will house our configuration:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;pwd&lt;/span&gt;
/home/jtopjian/terraform-openstack-test
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;vi main.tf
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Start by creating a &lt;a href=&quot;https://www.terraform.io/docs/providers/aws/r/key_pair.html&quot;&gt;key pair&lt;/a&gt;:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;provider &quot;aws&quot; {
  region = &quot;us-west-2&quot;
}

resource &quot;aws_key_pair&quot; &quot;openstack&quot; {
  key_name   = &quot;openstack-key&quot;
  public_key = &quot;${file(&quot;key/id_rsa.pub&quot;)}&quot;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;With that in place, run:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;terraform init
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Next, create a &lt;a href=&quot;https://www.terraform.io/docs/providers/aws/r/security_group.html&quot;&gt;Security Group&lt;/a&gt;.
This will allow traffic in and out of the virtual machine. Add the following
to &lt;code class=&quot;highlighter-rouge&quot;&gt;main.tf&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  provider &quot;aws&quot; {
    region = &quot;us-west-2&quot;
  }

  resource &quot;aws_key_pair&quot; &quot;openstack&quot; {
    key_name   = &quot;openstack-key&quot;
    public_key = &quot;${file(&quot;key/id_rsa.pub&quot;)}&quot;
  }

&lt;span class=&quot;gi&quot;&gt;+ resource &quot;aws_security_group&quot; &quot;openstack&quot; {
+   name        = &quot;openstack&quot;
+   description = &quot;Allow all inbound/outbound traffic&quot;
+
+   ingress {
+     from_port   = 0
+     to_port     = 0
+     protocol    = &quot;tcp&quot;
+     cidr_blocks = [&quot;0.0.0.0/0&quot;]
+   }
+
+   ingress {
+     from_port   = 0
+     to_port     = 0
+     protocol    = &quot;udp&quot;
+     cidr_blocks = [&quot;0.0.0.0/0&quot;]
+   }
+
+   ingress {
+     from_port   = 0
+     to_port     = 0
+     protocol    = &quot;icmp&quot;
+     cidr_blocks = [&quot;0.0.0.0/0&quot;]
+   }
+ }
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p class=&quot;alert alert-info&quot;&gt;Note: Don't include the &lt;code class=&quot;highlighter-rouge&quot;&gt;+&lt;/code&gt;. It's used to highlight what has been added to the
configuration.&lt;/p&gt;

&lt;p&gt;With that in place, run:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;terraform plan
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;If you log in to your AWS console through a browser, you can see that the key
pair and security group have been added to your account.&lt;/p&gt;

&lt;p&gt;You can easily destroy and recreate these resources at-will:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;terraform plan
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;terraform destroy
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;terraform plan
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;terraform apply
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;terraform show
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Finally, create a virtual machine. Add the following to &lt;code class=&quot;highlighter-rouge&quot;&gt;main.tf&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  provider &quot;aws&quot; {
    region = &quot;us-west-2&quot;
  }

  resource &quot;aws_key_pair&quot; &quot;openstack&quot; {
    key_name   = &quot;openstack-key&quot;
    public_key = &quot;${file(&quot;key/id_rsa.pub&quot;)}&quot;
  }

  resource &quot;aws_security_group&quot; &quot;openstack&quot; {
    name        = &quot;openstack&quot;
    description = &quot;Allow all inbound/outbound traffic&quot;

    ingress {
      from_port   = 0
      to_port     = 0
      protocol    = &quot;tcp&quot;
      cidr_blocks = [&quot;0.0.0.0/0&quot;]
    }

    ingress {
      from_port   = 0
      to_port     = 0
      protocol    = &quot;udp&quot;
      cidr_blocks = [&quot;0.0.0.0/0&quot;]
    }

    ingress {
      from_port   = 0
      to_port     = 0
      protocol    = &quot;icmp&quot;
      cidr_blocks = [&quot;0.0.0.0/0&quot;]
    }

  }

&lt;span class=&quot;gi&quot;&gt;+ resource &quot;aws_spot_instance_request&quot; &quot;openstack&quot; {
+   ami = &quot;ami-0c2aba6c&quot;
+   spot_price = &quot;0.0440&quot;
+   instance_type = &quot;m3.xlarge&quot;
+   wait_for_fulfillment = true
+   spot_type = &quot;one-time&quot;
+   key_name = &quot;${aws_key_pair.openstack.key_name}&quot;
+
+   security_groups = [&quot;default&quot;, &quot;${aws_security_group.openstack.name}&quot;]
+
+   root_block_device {
+     volume_size = 40
+     delete_on_termination = true
+   }
+
+   tags {
+     Name = &quot;OpenStack Test Infra&quot;
+   }
+ }
+
+ output &quot;ip&quot; {
+   value = &quot;${aws_spot_instance_request.openstack.public_ip}&quot;
+ }
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Above, an &lt;a href=&quot;https://www.terraform.io/docs/providers/aws/r/spot_instance_request.html&quot;&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;aws_spot_instance_request&lt;/code&gt;&lt;/a&gt;
resource was added. This will launch a Spot Instance using the parameters we
specified.&lt;/p&gt;

&lt;p&gt;It's important to note that the &lt;code class=&quot;highlighter-rouge&quot;&gt;aws_spot_instance_request&lt;/code&gt; resource also takes
the same parameters as the &lt;a href=&quot;https://www.terraform.io/docs/providers/aws/r/instance.html&quot;&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;aws_instance&lt;/code&gt;&lt;/a&gt;
resource.&lt;/p&gt;

&lt;p&gt;The &lt;code class=&quot;highlighter-rouge&quot;&gt;ami&lt;/code&gt; being used is the latest CentOS 7 AMI published in the &lt;code class=&quot;highlighter-rouge&quot;&gt;us-west-2&lt;/code&gt;
region. You can see the list of AMIs &lt;a href=&quot;https://wiki.centos.org/Cloud/AWS&quot;&gt;here&lt;/a&gt;.
Make sure you use the correct AMI for the region you're deploying to.&lt;/p&gt;

&lt;p&gt;Notice how this resource is referencing the other resources you created (the key
pair, and the security group). Additionally, notice how you're specifying a
&lt;code class=&quot;highlighter-rouge&quot;&gt;spot_price&lt;/code&gt;. This is helpful to limit the amount of money that will be spent
on this instance. You can get an accurate price by going to the
&lt;a href=&quot;https://us-west-2.console.aws.amazon.com/ec2sp/v1/spot/home?region=us-west-2&quot;&gt;Spot Request&lt;/a&gt;
page and clicking on &quot;Pricing History&quot;. Again, make sure you are looking at the
correct region.&lt;/p&gt;

&lt;p&gt;An &lt;code class=&quot;highlighter-rouge&quot;&gt;output&lt;/code&gt; was also added to the &lt;code class=&quot;highlighter-rouge&quot;&gt;main.tf&lt;/code&gt; file. This will print out the
public IP address of the AWS instance when Terraform completes.&lt;/p&gt;

&lt;p class=&quot;alert alert-info&quot;&gt;Amazon limits the number of spot instances you can launch at any given time.
You might find that if you create, delete, and recreate a spot instance too
quickly, Terraform will give you an error. This is Amazon telling you to wait.
You can open a support ticket with Amazon/AWS and ask for a larger spot quota to
be placed on your account. I asked for the ability to launch 5 spot instances at
any given time in the &lt;code class=&quot;highlighter-rouge&quot;&gt;us-west-1&lt;/code&gt; and &lt;code class=&quot;highlighter-rouge&quot;&gt;us-west-2&lt;/code&gt; regions. This took around two
business days to complete.&lt;/p&gt;

&lt;p&gt;With all of this in place, run Terraform:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;When it has completed, you should see output similar to the following:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;Outputs:

ip = 54.71.64.171
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You should now be able to SSH to the instance:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;ssh &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; key/id_rsa centos@54.71.64.171
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;And there you have it! You now have access to a CentOS virtual machine to
continue testing OpenStack with.&lt;/p&gt;

&lt;h2 id=&quot;installing-openstack&quot;&gt;Installing OpenStack&lt;/h2&gt;

&lt;p&gt;There are numerous ways to install OpenStack. Given that the purpose of this
setup is to create an easy-to-deploy OpenStack environment for testing, let's
narrow our choices down to methods that can provide a simple all-in-one setup.&lt;/p&gt;

&lt;h3 id=&quot;devstack&quot;&gt;DevStack&lt;/h3&gt;

&lt;p&gt;&lt;a href=&quot;https://docs.openstack.org/devstack/latest/&quot;&gt;DevStack&lt;/a&gt; provides an easy way of
creating an all-in-one environment for testing. It's mainly used to test the
latest OpenStack source code. Because of that, it can be buggy. I've found that
even when using DevStack to deploy a stable version of OpenStack, there were
times when DevStack failed to complete. Given that it takes approximately two
hours for DevStack to install, a failed installation wastes two hours of your
time.&lt;/p&gt;

&lt;p&gt;Additionally, DevStack isn't suitable for a virtual machine which might
reboot, since its services won't come back up on their own. And a reboot is a
real possibility here: when testing an application that uses OpenStack, the
application can overload the virtual machine and cause it to lock up.&lt;/p&gt;

&lt;p&gt;So for these reasons, I won't use DevStack here. That's not to say that DevStack
isn't a suitable tool – after all, it's used as the core of all OpenStack
testing.&lt;/p&gt;

&lt;h3 id=&quot;packstack&quot;&gt;PackStack&lt;/h3&gt;

&lt;p&gt;&lt;a href=&quot;https://www.rdoproject.org/install/packstack/&quot;&gt;PackStack&lt;/a&gt; is also able to
easily install an all-in-one OpenStack environment. Rather than building
OpenStack from source, it leverages &lt;a href=&quot;https://www.rdoproject.org/&quot;&gt;RDO&lt;/a&gt; packages
and Puppet.&lt;/p&gt;

&lt;p&gt;PackStack is also beneficial because it installs the latest stable release of
OpenStack. If you are developing an application for end users, those users will
most likely be running it against an OpenStack cloud based on a stable
release.&lt;/p&gt;

&lt;h4 id=&quot;installing-openstack-with-packstack&quot;&gt;Installing OpenStack with PackStack&lt;/h4&gt;

&lt;p&gt;The &lt;a href=&quot;https://www.rdoproject.org/install/packstack/&quot;&gt;PackStack&lt;/a&gt; home page has all
of the instructions necessary to get a simple environment up and running. Here
they are compressed into a shell script:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;#!/bin/bash&lt;/span&gt;

systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl &lt;span class=&quot;nb&quot;&gt;enable &lt;/span&gt;network
systemctl start network

yum install &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt; https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
yum install &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt; centos-release-openstack-ocata
yum-config-manager &lt;span class=&quot;nt&quot;&gt;--enable&lt;/span&gt; openstack-ocata
yum update &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt;
yum install &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt; openstack-packstack
packstack &lt;span class=&quot;nt&quot;&gt;--allinone&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p class=&quot;alert alert-info&quot;&gt;OpenStack Pike is available at the time of this writing; however, I have not
had a chance to verify that these instructions work with it. Therefore, I'll be
using Ocata.&lt;/p&gt;

&lt;p&gt;Save the file as something like &lt;code class=&quot;highlighter-rouge&quot;&gt;deploy.sh&lt;/code&gt; and then run it in your virtual
machine:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;bash deploy.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p class=&quot;alert alert-info&quot;&gt;Consider using a tool like &lt;code class=&quot;highlighter-rouge&quot;&gt;tmux&lt;/code&gt; or &lt;code class=&quot;highlighter-rouge&quot;&gt;screen&lt;/code&gt; after logging into your remote
virtual machine. This ensures the &lt;code class=&quot;highlighter-rouge&quot;&gt;deploy.sh&lt;/code&gt; script continues to run even
if your connection to the virtual machine is terminated.&lt;/p&gt;

&lt;p&gt;The process will take approximately 30 minutes to complete.&lt;/p&gt;

&lt;p&gt;When it's finished, you'll have a usable all-in-one environment:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;su
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; /root
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;source &lt;/span&gt;keystonerc_demo
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;openstack network list
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;openstack image list
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;openstack server create &lt;span class=&quot;nt&quot;&gt;--flavor&lt;/span&gt; 1 &lt;span class=&quot;nt&quot;&gt;--image&lt;/span&gt; cirros &lt;span class=&quot;nb&quot;&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;testing-with-openstack&quot;&gt;Testing with OpenStack&lt;/h2&gt;

&lt;p&gt;Now that OpenStack is up and running, you can begin testing with it.&lt;/p&gt;

&lt;p&gt;Let's say you want to add a new feature to &lt;a href=&quot;https://github.com/gophercloud/gophercloud&quot;&gt;Gophercloud&lt;/a&gt;.
First, you need to install Go:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;yum install &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt; wget
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;wget &lt;span class=&quot;nt&quot;&gt;-O&lt;/span&gt; /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;chmod +x /usr/local/bin/gimme
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;eval&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;$(&lt;/span&gt;/usr/local/bin/gimme 1.8&lt;span class=&quot;k&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;GOPATH&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/go
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;PATH&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$PATH&lt;/span&gt;:&lt;span class=&quot;nv&quot;&gt;$GOROOT&lt;/span&gt;/bin:&lt;span class=&quot;nv&quot;&gt;$GOPATH&lt;/span&gt;/bin
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;To make those commands permanent, add the following to your &lt;code class=&quot;highlighter-rouge&quot;&gt;.bashrc&lt;/code&gt; file:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[[&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; /usr/local/bin/gimme &lt;span class=&quot;o&quot;&gt;]]&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;then
  &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;eval&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;$(&lt;/span&gt;/usr/local/bin/gimme 1.8&lt;span class=&quot;k&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
  &lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;GOPATH&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/go
  &lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;PATH&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$PATH&lt;/span&gt;:&lt;span class=&quot;nv&quot;&gt;$GOROOT&lt;/span&gt;/bin:&lt;span class=&quot;nv&quot;&gt;$GOPATH&lt;/span&gt;/bin
&lt;span class=&quot;k&quot;&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Next, install Gophercloud:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;go get github.com/gophercloud/gophercloud
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; ~/go/src/github.com/gophercloud/gophercloud
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;go get &lt;span class=&quot;nt&quot;&gt;-u&lt;/span&gt; ./...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In order to run Gophercloud acceptance tests, you need to have several
environment variables set. These are described &lt;a href=&quot;https://github.com/gophercloud/gophercloud/tree/master/acceptance&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It would be tedious to set each variable for each test or each time you log
in to the virtual machine. Therefore, embed the variables into the
&lt;code class=&quot;highlighter-rouge&quot;&gt;/root/keystonerc_demo&lt;/code&gt; and &lt;code class=&quot;highlighter-rouge&quot;&gt;/root/keystonerc_admin&lt;/code&gt; files:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;source&lt;/span&gt; /root/keystonerc_admin
nova flavor-create m1.acctest 99 512 5 1 &lt;span class=&quot;nt&quot;&gt;--ephemeral&lt;/span&gt; 10
nova flavor-create m1.resize 98 512 6 1 &lt;span class=&quot;nt&quot;&gt;--ephemeral&lt;/span&gt; 10
&lt;span class=&quot;nv&quot;&gt;_NETWORK_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;$(&lt;/span&gt;openstack network show private &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; id &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; value&lt;span class=&quot;k&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;_SUBNET_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;$(&lt;/span&gt;openstack subnet show private_subnet &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; id &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; value&lt;span class=&quot;k&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;_EXTGW_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;$(&lt;/span&gt;openstack network show public &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; id &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; value&lt;span class=&quot;k&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;_IMAGE_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;$(&lt;/span&gt;openstack image show cirros &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; id &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; value&lt;span class=&quot;k&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_IMAGE_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;cirros&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_IMAGE_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$_IMAGE_ID&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_NETWORK_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$_NETWORK_ID&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_EXTGW_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$_EXTGW_ID&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_POOL_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;public&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_FLAVOR_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;99 &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_FLAVOR_ID_RESIZE&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;98 &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_DOMAIN_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;default &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_TENANT_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\$&lt;/span&gt;OS_PROJECT_NAME &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_TENANT_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\$&lt;/span&gt;OS_PROJECT_ID &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_SHARE_NETWORK_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;foobar&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin

&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_IMAGE_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;cirros&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_IMAGE_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$_IMAGE_ID&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_NETWORK_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$_NETWORK_ID&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_EXTGW_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$_EXTGW_ID&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_POOL_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;public&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_FLAVOR_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;99 &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_FLAVOR_ID_RESIZE&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;98 &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_DOMAIN_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;default &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_TENANT_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\$&lt;/span&gt;OS_PROJECT_NAME &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_TENANT_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\$&lt;/span&gt;OS_PROJECT_ID &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_SHARE_NETWORK_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;foobar&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
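&lt;p&gt;Note the backslash before &lt;code class=&quot;highlighter-rouge&quot;&gt;$OS_PROJECT_NAME&lt;/code&gt; and &lt;code class=&quot;highlighter-rouge&quot;&gt;$OS_PROJECT_ID&lt;/code&gt; in the
lines above: it prevents the variable from expanding when the line is written,
so the rc file contains a literal reference that is resolved each time the file
is sourced. A minimal sketch of the difference, using a throwaway file and
variable names rather than the real keystonerc files:&lt;/p&gt;

```shell
#!/bin/sh
# Demonstrate write-time vs source-time expansion, as used when appending
# exports to the keystonerc files. PROJECT and TENANT are throwaway names.
rcfile=$(mktemp)

PROJECT=demo
echo "export TENANT=\$PROJECT" >> "$rcfile"  # escaped: a literal $PROJECT is written

PROJECT=admin      # the value changes after the line was written
. "$rcfile"        # TENANT picks up the current value of PROJECT
echo "$TENANT"

rm -f "$rcfile"
```

&lt;p&gt;Without the backslash, the rc file would capture whatever value the variable
happened to have at deploy time, which is usually not what you want.&lt;/p&gt;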

&lt;p&gt;Now try to run a test:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;source&lt;/span&gt; ~/keystonerc_demo
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; ~/go/src/github.com/gophercloud/gophercloud
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;go &lt;span class=&quot;nb&quot;&gt;test&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-tags&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;fixtures acceptance&quot;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-run&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;TestServersCreateDestroy&quot;&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
  github.com/gophercloud/gophercloud/acceptance/openstack/compute/v2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;That &lt;code class=&quot;highlighter-rouge&quot;&gt;go&lt;/code&gt; command is long and tedious. A shortcut would be more helpful. Add the
following to &lt;code class=&quot;highlighter-rouge&quot;&gt;~/.bashrc&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;gophercloudtest&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[[&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;]]&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[[&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$2&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;]]&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;then
    &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;pushd&lt;/span&gt;  ~/go/src/github.com/gophercloud/gophercloud
    go &lt;span class=&quot;nb&quot;&gt;test&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-tags&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;fixtures acceptance&quot;&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-run&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$1&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; github.com/gophercloud/gophercloud/acceptance/openstack/&lt;span class=&quot;nv&quot;&gt;$2&lt;/span&gt; | tee ~/gophercloud.log
    &lt;span class=&quot;nb&quot;&gt;popd
  &lt;/span&gt;&lt;span class=&quot;k&quot;&gt;fi&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
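&lt;p&gt;The &lt;code class=&quot;highlighter-rouge&quot;&gt;[[ -n $1 ]] &amp;amp;&amp;amp; [[ -n $2 ]]&lt;/code&gt; check makes the function a silent no-op
unless both a test pattern and a package path are supplied, which avoids
accidentally kicking off a broad test run. A stripped-down sketch of the same
guard, using a hypothetical &lt;code class=&quot;highlighter-rouge&quot;&gt;runboth&lt;/code&gt; function (POSIX &lt;code class=&quot;highlighter-rouge&quot;&gt;[ ]&lt;/code&gt; form) in place of
the real &lt;code class=&quot;highlighter-rouge&quot;&gt;go test&lt;/code&gt; invocation:&lt;/p&gt;

```shell
#!/bin/sh
# Sketch of the two-argument guard used by gophercloudtest: do nothing
# unless both a test pattern ($1) and a package path ($2) are given.
# "runboth" is a hypothetical stand-in for the real function.
runboth() {
  if [ -n "$1" ] && [ -n "$2" ]; then
    echo "would run: go test -run \"$1\" in acceptance/openstack/$2"
  fi
}

runboth                                       # prints nothing
runboth TestServersCreateDestroy compute/v2   # prints the "would run" line
```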

&lt;p&gt;You can now run tests by doing:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;source&lt;/span&gt; ~/.bashrc
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;gophercloudtest TestServersCreateDestroy compute/v2
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;automating-the-process&quot;&gt;Automating the Process&lt;/h2&gt;

&lt;p&gt;A lot of work has gone into this environment since first logging in to the
virtual machine, and it would be a hassle to repeat it all by hand. It would be
better to automate the entire process, from start to finish.&lt;/p&gt;

&lt;p&gt;First, create a new directory on your workstation:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;pwd&lt;/span&gt;
/home/jtopjian/terraform-openstack-test
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;mkdir files
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;files
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;vi deploy.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;In the &lt;code class=&quot;highlighter-rouge&quot;&gt;deploy.sh&lt;/code&gt; script, add the following contents:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;#!/bin/bash&lt;/span&gt;

systemctl disable firewalld
systemctl stop firewalld
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl &lt;span class=&quot;nb&quot;&gt;enable &lt;/span&gt;network
systemctl start network

yum install &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt; https://repos.fedorapeople.org/repos/openstack/openstack-ocata/rdo-release-ocata-3.noarch.rpm
yum install &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt; centos-release-openstack-ocata
yum-config-manager &lt;span class=&quot;nt&quot;&gt;--enable&lt;/span&gt; openstack-ocata
yum update &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt;
yum install &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt; openstack-packstack
packstack &lt;span class=&quot;nt&quot;&gt;--allinone&lt;/span&gt;

&lt;span class=&quot;nb&quot;&gt;source&lt;/span&gt; /root/keystonerc_admin
nova flavor-create m1.acctest 99 512 5 1 &lt;span class=&quot;nt&quot;&gt;--ephemeral&lt;/span&gt; 10
nova flavor-create m1.resize 98 512 6 1 &lt;span class=&quot;nt&quot;&gt;--ephemeral&lt;/span&gt; 10
&lt;span class=&quot;nv&quot;&gt;_NETWORK_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;$(&lt;/span&gt;openstack network show private &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; id &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; value&lt;span class=&quot;k&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;_SUBNET_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;$(&lt;/span&gt;openstack subnet show private_subnet &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; id &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; value&lt;span class=&quot;k&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;_EXTGW_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;$(&lt;/span&gt;openstack network show public &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; id &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; value&lt;span class=&quot;k&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;_IMAGE_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;$(&lt;/span&gt;openstack image show cirros &lt;span class=&quot;nt&quot;&gt;-c&lt;/span&gt; id &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; value&lt;span class=&quot;k&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_IMAGE_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;cirros&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_IMAGE_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$_IMAGE_ID&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_NETWORK_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$_NETWORK_ID&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_EXTGW_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$_EXTGW_ID&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_POOL_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;public&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_FLAVOR_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;99 &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_FLAVOR_ID_RESIZE&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;98 &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_DOMAIN_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;default &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_TENANT_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\$&lt;/span&gt;OS_PROJECT_NAME &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_TENANT_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\$&lt;/span&gt;OS_PROJECT_ID &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_SHARE_NETWORK_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;foobar&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_admin

&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_IMAGE_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;cirros&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_IMAGE_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$_IMAGE_ID&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_NETWORK_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$_NETWORK_ID&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_EXTGW_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$_EXTGW_ID&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_POOL_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;public&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_FLAVOR_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;99 &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_FLAVOR_ID_RESIZE&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;98 &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_DOMAIN_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;default &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_TENANT_NAME&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\$&lt;/span&gt;OS_PROJECT_NAME &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_TENANT_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\$&lt;/span&gt;OS_PROJECT_ID &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo
&lt;span class=&quot;nb&quot;&gt;echo export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;OS_SHARE_NETWORK_ID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;foobar&quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/keystonerc_demo

yum install &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt; wget
wget &lt;span class=&quot;nt&quot;&gt;-O&lt;/span&gt; /usr/local/bin/gimme https://raw.githubusercontent.com/travis-ci/gimme/master/gimme
chmod +x /usr/local/bin/gimme
&lt;span class=&quot;nb&quot;&gt;eval&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;$(&lt;/span&gt;/usr/local/bin/gimme 1.8&lt;span class=&quot;k&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;GOPATH&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;/go
&lt;span class=&quot;nb&quot;&gt;export &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;PATH&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$PATH&lt;/span&gt;:&lt;span class=&quot;nv&quot;&gt;$GOROOT&lt;/span&gt;/bin:&lt;span class=&quot;nv&quot;&gt;$GOPATH&lt;/span&gt;/bin

go get github.com/gophercloud/gophercloud
&lt;span class=&quot;nb&quot;&gt;pushd&lt;/span&gt; ~/go/src/github.com/gophercloud/gophercloud
go get &lt;span class=&quot;nt&quot;&gt;-u&lt;/span&gt; ./...
&lt;span class=&quot;nb&quot;&gt;popd

cat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; /root/.bashrc &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
if [[ -f /usr/local/bin/gimme ]]; then
  eval &quot;\&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;$(&lt;/span&gt;/usr/local/bin/gimme 1.8&lt;span class=&quot;k&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot;
  export GOPATH=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$HOME&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;/go
  export PATH=\&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$PATH&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$GOROOT&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;/bin:\&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$GOPATH&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;/bin
fi

gophercloudtest() {
  if [[ -n \&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$1&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; ]] &amp;amp;&amp;amp; [[ -n \&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$2&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; ]]; then
    pushd  ~/go/src/github.com/gophercloud/gophercloud
    go test -v -tags &quot;fixtures acceptance&quot; -run &quot;\&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$1&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;&quot; github.com/gophercloud/gophercloud/acceptance/openstack/\&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$2&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt; | tee ~/gophercloud.log
    popd
  fi
}
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Next, add the following to &lt;code class=&quot;highlighter-rouge&quot;&gt;main.tf&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;  provider &quot;aws&quot; {
    region = &quot;us-west-2&quot;
  }

  resource &quot;aws_key_pair&quot; &quot;openstack&quot; {
    key_name   = &quot;openstack-key&quot;
    public_key = &quot;${file(&quot;key/id_rsa.pub&quot;)}&quot;
  }

  resource &quot;aws_security_group&quot; &quot;openstack&quot; {
    name        = &quot;openstack&quot;
    description = &quot;Allow all inbound/outbound traffic&quot;

    ingress {
      from_port   = 0
      to_port     = 0
      protocol    = &quot;tcp&quot;
      cidr_blocks = [&quot;0.0.0.0/0&quot;]
    }

    ingress {
      from_port   = 0
      to_port     = 0
      protocol    = &quot;udp&quot;
      cidr_blocks = [&quot;0.0.0.0/0&quot;]
    }

    ingress {
      from_port   = 0
      to_port     = 0
      protocol    = &quot;icmp&quot;
      cidr_blocks = [&quot;0.0.0.0/0&quot;]
    }

  }

  resource &quot;aws_spot_instance_request&quot; &quot;openstack&quot; {
    ami = &quot;ami-0c2aba6c&quot;
    spot_price = &quot;0.0440&quot;
    instance_type = &quot;m3.xlarge&quot;
    wait_for_fulfillment = true
    spot_type = &quot;one-time&quot;
    key_name = &quot;${aws_key_pair.openstack.key_name}&quot;

    security_groups = [&quot;default&quot;, &quot;${aws_security_group.openstack.name}&quot;]

    root_block_device {
      volume_size = 40
      delete_on_termination = true
    }

    tags {
      Name = &quot;OpenStack Test Infra&quot;
    }
  }

&lt;span class=&quot;gi&quot;&gt;+ resource &quot;null_resource&quot; &quot;openstack&quot; {
+  connection {
+    host        = &quot;${aws_spot_instance_request.openstack.public_ip}&quot;
+    user        = &quot;centos&quot;
+    private_key = &quot;${file(&quot;key/id_rsa&quot;)}&quot;
+  }
+
+  provisioner &quot;file&quot; {
+    source      = &quot;files&quot;
+    destination = &quot;/home/centos/files&quot;
+  }
+
+  provisioner &quot;remote-exec&quot; {
+    inline = [
+      &quot;sudo bash /home/centos/files/deploy.sh&quot;
+    ]
+  }
+ }
&lt;/span&gt;
  output &quot;ip&quot; {
    value = &quot;${aws_spot_instance_request.openstack.public_ip}&quot;
  }
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The above has added a &lt;a href=&quot;https://www.terraform.io/docs/provisioners/null_resource.html&quot;&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;null_resource&lt;/code&gt;&lt;/a&gt;.
A &lt;code class=&quot;highlighter-rouge&quot;&gt;null_resource&lt;/code&gt; is simply an empty Terraform resource. It's commonly used to
hold provisioning steps that aren't tied to a single infrastructure resource. In this case, the
above &lt;code class=&quot;highlighter-rouge&quot;&gt;null_resource&lt;/code&gt; is doing the following:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Configuring the connection to the virtual machine.&lt;/li&gt;
  &lt;li&gt;Copying the &lt;code class=&quot;highlighter-rouge&quot;&gt;files&lt;/code&gt; directory to the virtual machine.&lt;/li&gt;
  &lt;li&gt;Remotely running the &lt;code class=&quot;highlighter-rouge&quot;&gt;deploy.sh&lt;/code&gt; script.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now when you run Terraform, once the Spot Instance has been created, Terraform
will copy the &lt;code class=&quot;highlighter-rouge&quot;&gt;files&lt;/code&gt; directory to it and then execute &lt;code class=&quot;highlighter-rouge&quot;&gt;deploy.sh&lt;/code&gt;. A full run
takes approximately 30-40 minutes, but once it completes, OpenStack will be up
and running.&lt;/p&gt;

&lt;p&gt;Since a new resource type has been added (&lt;code class=&quot;highlighter-rouge&quot;&gt;null_resource&lt;/code&gt;), you will need to run
&lt;code class=&quot;highlighter-rouge&quot;&gt;terraform init&lt;/code&gt;:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;pwd&lt;/span&gt;
/home/jtopjian/terraform-openstack-test
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Then to run everything from start to finish, do:&lt;/p&gt;

&lt;div class=&quot;language-shell highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;terraform destroy
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;When Terraform is finished, you will have a fully functional OpenStack
environment suitable for testing.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;This post detailed how to create an all-in-one OpenStack environment that is
suitable for testing applications. Additionally, all configuration was recorded
both in Terraform and shell scripts so the environment can be created
automatically.&lt;/p&gt;

&lt;p&gt;Granted, if you aren't creating a Go-based application, you will need to install
other dependencies, but it should be easy to figure out from the example
detailed here.&lt;/p&gt;

&lt;p&gt;While this setup is a great way to easily build a testing environment, there are
still other improvements that can be made. For example, instead of running
PackStack each time, an AMI image can be created which already has OpenStack
installed. Additionally, multi-node environments can be created for more
advanced testing. These methods will be detailed in future posts.&lt;/p&gt;
</description>
				<pubDate>Sat, 30 Sep 2017 00:00:00 -0600</pubDate>
				<link>http://terrarum.net/blog/building-openstack-environments.html</link>
				<guid isPermaLink="true">http://terrarum.net/blog/building-openstack-environments.html</guid>
			</item>
		
			<item>
				<title>Waffles Year End Review 2015</title>
				<description>&lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;/h2&gt;

&lt;p&gt;2015 was different for me. Instead of working on several small projects, I focused my time on two larger projects. One of them was &lt;a href=&quot;http://waffles.terrarum.net&quot;&gt;Waffles&lt;/a&gt;.&lt;/p&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#status-of-waffles&quot; id=&quot;markdown-toc-status-of-waffles&quot;&gt;Status of Waffles&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#waffles-and-terraform&quot; id=&quot;markdown-toc-waffles-and-terraform&quot;&gt;Waffles and Terraform&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#waffles-and-shell&quot; id=&quot;markdown-toc-waffles-and-shell&quot;&gt;Waffles and Shell&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#future-plans&quot; id=&quot;markdown-toc-future-plans&quot;&gt;Future Plans&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;status-of-waffles&quot;&gt;Status of Waffles&lt;/h2&gt;

&lt;p&gt;Waffles continues to grow and evolve and I'm having a lot of fun with it.&lt;/p&gt;

&lt;p&gt;You can see all changes in its &lt;a href=&quot;https://github.com/jtopjian/waffles/blob/master/CHANGELOG.md&quot;&gt;CHANGELOG&lt;/a&gt; file, but to call out some of the most notable ones:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Git Profiles: Enables profiles to be stored in a remote Git repository and downloaded upon running Waffles.&lt;/li&gt;
  &lt;li&gt;Host Files: A special profile to store files for one particular host. These files are only ever copied to that one host. This is useful for SSL certificates.&lt;/li&gt;
  &lt;li&gt;Profile Data: Data specific to a profile, but still too generic for a site can be stored in &lt;code class=&quot;highlighter-rouge&quot;&gt;profiles/profile_name/data.sh&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;Lots of new resources such as Consul, Git, Python virtualenv and pip, and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;waffles-and-terraform&quot;&gt;Waffles and Terraform&lt;/h3&gt;

&lt;p&gt;Waffles and Terraform make a great pair. Terraform handles the creation of core IaaS resources and Waffles can be used to provision the compute-related resources. Because I find both Waffles and Terraform so useful, I recently created a &lt;a href=&quot;https://github.com/jtopjian/terraform-provisioner-waffles&quot;&gt;Waffles Terraform Provisioner&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;waffles-and-shell&quot;&gt;Waffles and Shell&lt;/h3&gt;

&lt;p&gt;Waffles provides a large library of &lt;a href=&quot;http://waffles.terrarum.net/functions/system/&quot;&gt;helpful Bash functions&lt;/a&gt;. By sourcing &lt;code class=&quot;highlighter-rouge&quot;&gt;lib/stdlib/system.sh&lt;/code&gt; in your shell, you can use any of them directly on the command line or in your own shell scripts. You can even use any of the Waffles resources directly on the command line!&lt;/p&gt;

&lt;h2 id=&quot;future-plans&quot;&gt;Future Plans&lt;/h2&gt;

&lt;p&gt;Here are some of the ideas I have for Waffles in 2016:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Better &quot;no-op&quot; support: I want Waffles to be more intelligent about performing a no-operation command; maybe showing the diff of a file and maybe running a syntax check against requested changes.&lt;/li&gt;
  &lt;li&gt;Publishing profiles: I've been adamant about not creating a Waffles profile / module community. I don't want to see Waffles Profiles made into large, generic scripts that account for a wide-variety of environments. I'd rather see small, agile scripts that are targeted for specific environments. However, that doesn't mean that profiles can't be published to Github for others to find and modify on their own. I plan to publish my own profiles soon.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;I dedicated a lot of time in 2015 working on Waffles and I'm happy to say that it really paid off. I'm able to use Waffles to sketch out experimental systems as well as deploy stable, production systems very quickly. I plan to continue supporting and working on Waffles throughout 2016.&lt;/p&gt;
</description>
				<pubDate>Fri, 01 Jan 2016 00:00:00 -0700</pubDate>
				<link>http://terrarum.net/blog/waffles-year-end-review-2015.html</link>
				<guid isPermaLink="true">http://terrarum.net/blog/waffles-year-end-review-2015.html</guid>
			</item>
		
			<item>
				<title>Terraform Year End Review 2015</title>
				<description>&lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;/h2&gt;

&lt;p&gt;As I mentioned in my &lt;a href=&quot;http://terrarum.net/blog/waffles-year-end-review-2015.html&quot;&gt;Waffles&lt;/a&gt; year end review, I spent 2015 working on two major projects. Waffles was one project and Terraform was the other.&lt;/p&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#terraform&quot; id=&quot;markdown-toc-terraform&quot;&gt;Terraform&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#project-review&quot; id=&quot;markdown-toc-project-review&quot;&gt;Project Review&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;terraform&quot;&gt;Terraform&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;http://terraform.io&quot;&gt;Terraform&lt;/a&gt; is an amazing tool. It can easily provision and connect resources across multiple clouds and providers. If you haven't tried it, I highly encourage you to do so.&lt;/p&gt;

&lt;p&gt;And if you use Terraform but haven't dug into its internals, I encourage you to try that, too. Terraform's code is very clean and easy to learn. It's easy to add support for new resources: from virtual machines, to MySQL or PostgreSQL databases, to SSL certificates.&lt;/p&gt;

&lt;p&gt;The number of resources that Terraform supports continues to grow. With new resources, new ways of using Terraform and interacting with resources are discovered. It's going to be very interesting to see how Terraform evolves.&lt;/p&gt;

&lt;h2 id=&quot;project-review&quot;&gt;Project Review&lt;/h2&gt;

&lt;p&gt;When Terraform was first released, I patiently waited for it to support OpenStack. Early in 2015, I got tired of waiting and began work on my own &lt;a href=&quot;https://github.com/jtopjian/terraform-provider-openstack&quot;&gt;provider&lt;/a&gt; based on the last known work. Coincidentally, two other parties began similar work. Rather than duplicate effort three ways, everyone convened on a single code base. In the spring of 2015, the code was merged into Terraform proper and Terraform finally had native OpenStack support. Once merged, I continued to work on the OpenStack functionality: fixing bugs, adding features, and answering questions.&lt;/p&gt;

&lt;p&gt;This type of project was very new to me. I'm a systems administrator by trade, so working on a large software project was something I wasn't familiar with at all. I'm still not, but I think I'm getting the hang of it. It's fun to play both sides, too: from a developer point of view, I have a much better appreciation for testing; from a sysadmin point of view, I'm mindful of making sure patches don't cause backwards incompatibilities.&lt;/p&gt;

&lt;p&gt;I plan to continue working on Terraform's OpenStack support throughout 2016. I'd like to see more OpenStack projects added to Terraform (Trove, Designate, Glance) as well as other non-OpenStack projects (such as &lt;a href=&quot;https://github.com/hashicorp/terraform/pull/4271&quot;&gt;Cobbler&lt;/a&gt;).&lt;/p&gt;
</description>
				<pubDate>Fri, 01 Jan 2016 00:00:00 -0700</pubDate>
				<link>http://terrarum.net/blog/terraform-year-end-review-2015.html</link>
				<guid isPermaLink="true">http://terrarum.net/blog/terraform-year-end-review-2015.html</guid>
			</item>
		
			<item>
				<title>Waffles</title>
				<description>&lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;/h2&gt;

&lt;p&gt;Waffles is a simple configuration management system written in Bash. It's small, simple, and has allowed me to easily sketch out various environments that I want to experiment with. If you want to jump right in, head over to the &lt;a href=&quot;http://waffles.terrarum.net&quot;&gt;homepage&lt;/a&gt;. The rest of this article will cover the history and some personal thoughts on why I created it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; I apologize if Jekyll flags this as a new post in the RSS feed. After originally publishing this article, I thought of a few other items I wanted to mention. In addition, I noticed Waffles was posted to &lt;a href=&quot;https://www.reddit.com/r/linux/comments/39koxm/waffles_is_a_simple_configuration_management/&quot;&gt;Reddit&lt;/a&gt;, so I wanted to address a few of the comments made.&lt;/p&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#defining-the-problem&quot; id=&quot;markdown-toc-defining-the-problem&quot;&gt;Defining the Problem&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#attempted-solutions&quot; id=&quot;markdown-toc-attempted-solutions&quot;&gt;Attempted Solutions&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#first-version&quot; id=&quot;markdown-toc-first-version&quot;&gt;First Version&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#second-version&quot; id=&quot;markdown-toc-second-version&quot;&gt;Second Version&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#roadblock&quot; id=&quot;markdown-toc-roadblock&quot;&gt;Roadblock&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#third-version&quot; id=&quot;markdown-toc-third-version&quot;&gt;Third Version&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#fourth-and-final-version&quot; id=&quot;markdown-toc-fourth-and-final-version&quot;&gt;Fourth and Final Version&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#current-status&quot; id=&quot;markdown-toc-current-status&quot;&gt;Current Status&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#is-waffles-competing&quot; id=&quot;markdown-toc-is-waffles-competing&quot;&gt;Is Waffles Competing?&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#is-waffles-better-than-x&quot; id=&quot;markdown-toc-is-waffles-better-than-x&quot;&gt;Is Waffles Better Than X?&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;defining-the-problem&quot;&gt;Defining the Problem&lt;/h2&gt;

&lt;p&gt;The last article I wrote, &lt;a href=&quot;http://terrarum.net/blog/puppet-infrastructure-2015.html&quot;&gt;Puppet Infrastructure 2015&lt;/a&gt;, was quite a beast. In a way, it was a cathartic exercise to write down all of my practices to sanely work with Puppet every day. After I published the article, I couldn't get over the fact that all of those practices are &lt;em&gt;just&lt;/em&gt; for Puppet – not any of the services Puppet sets up and configures. It didn't sit right with me, so I began to look into why I found configuration management systems so complex.&lt;/p&gt;

&lt;p&gt;&quot;Complex&quot; and &quot;Simple&quot; are subjective terms. One observation was that Omnibus and other all-in-one packages have become a popular way to ease the installation and configuration of the configuration management system itself. In my opinion, when configuring your configuration management system becomes so involved that it's easier to just use an all-in-one installation, that's &quot;complex&quot;. I wanted to see if it was possible to escape that: to create a configuration management system that works out of the box on modern Linux systems and can be installed by simply cloning a git repository.&lt;/p&gt;

&lt;p&gt;Secondly, I wanted to see if it was possible to strip &quot;resource models&quot; down to more simple components. Some systems have modeled resources in a specific language, like Python, while others have chosen to use a subprocess to interact with the best native command available. Both methods have their merits, though I favor the latter more. I wanted to take that method further: why not just use a Unix shell and related tools? After all, for decades, the standard way of interacting with a Unix-based system has been through the command-line.&lt;/p&gt;

&lt;p&gt;That's not to say that resource models in configuration management systems are the way they are for no reason. Take Puppet's Types and Providers system, for example. Decoupling the &quot;type&quot; and &quot;provider&quot; has enabled Puppet to provide a common resource interface for a variety of platforms and backends. In addition, the Types and Providers system provides the user with a way to easily manage all resources of a given type on a system, provide input validation, and a lot of other features.&lt;/p&gt;

&lt;p&gt;I have nothing but respect for the Types and Providers system. But I still wanted to see if it was possible to create a tool that provided the same core idea (abstract a resource into a simple model that allows for easy management) in a simpler way. It's an experiment to see what happens when the robust catalog system, input validation, and similar machinery are removed in favor of something more bare. Would chaos ensue?&lt;/p&gt;

&lt;p&gt;Third, I wanted to break out of the &quot;compiled manifest&quot; and &quot;workflow&quot; views of configuration management systems. Every configuration management system has some sort of DSL, and for a good reason. You can read about the decision to use a DSL with Puppet &lt;a href=&quot;https://puppetlabs.com/blog/why-puppet-has-its-own-configuration-language&quot;&gt;here&lt;/a&gt;. I have nothing against DSLs (you can see how plain Bash evolved into the Bash-based DSL in Waffles &lt;a href=&quot;http://waffles.terrarum.net/concepts/&quot;&gt;here&lt;/a&gt;), but I wanted to see if it was possible to not use a compiled manifest or workflow, and instead use a plain old top-down execution sequence.&lt;/p&gt;

&lt;p&gt;With these thoughts in mind, I set off to see what I could build.&lt;/p&gt;
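
&lt;p&gt;As a minimal illustration of the second idea (this is not Waffles code; the &lt;code class=&quot;highlighter-rouge&quot;&gt;file_line&lt;/code&gt; helper below is purely hypothetical), a shell-based resource boils down to: check the current state with standard tools, and only act when it differs from the desired state.&lt;/p&gt;

```shell
#!/usr/bin/env bash
# A hypothetical shell "resource" -- not actual Waffles code. It declares a
# desired state (a line in a file) and only changes the system when the
# current state differs, which makes repeated runs safe (idempotent).

file_line() {
  local file="$1" line="$2"
  if grep -qxF -- "$line" "$file" 2>/dev/null; then
    echo "file_line: no change needed"
  else
    printf '%s\n' "$line" >> "$file"
    echo "file_line: line added"
  fi
}

conf=$(mktemp)
file_line "$conf" "-m 64"   # first run changes the file
file_line "$conf" "-m 64"   # second run is a no-op
rm -f "$conf"
```

&lt;p&gt;The same pattern extends to packages, users, and firewall rules: each resource is just a function with a check step and a change step.&lt;/p&gt;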

&lt;p&gt;As a real-world test to validate my solution, I created a detailed list of steps that are required to build a simple, yet robust, Memcached service (this is the reason why there are so many references to Memcached in the Waffles documentation).&lt;/p&gt;

&lt;p&gt;You can easily get away with installing Memcached by doing:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;apt-get install &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt; memcached&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;But what if you also need to:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Edit &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/memcached.conf&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Have other common packages installed (logwatch, Postfix, fail2ban, Sensu, Nagios, etc)&lt;/li&gt;
  &lt;li&gt;Have some standard users or SSH keys installed&lt;/li&gt;
  &lt;li&gt;Have some standard firewall rules installed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can see how the idea of a &quot;simple&quot; Memcached service quickly becomes a first-class service in your environment. But does that mean the configuration management system must be &lt;em&gt;complex&lt;/em&gt; to satisfy this?&lt;/p&gt;

&lt;h2 id=&quot;attempted-solutions&quot;&gt;Attempted Solutions&lt;/h2&gt;

&lt;h3 id=&quot;first-version&quot;&gt;First Version&lt;/h3&gt;

&lt;p&gt;My initial solution was written in Golang: I was doing a lot of work with OpenStack and Terraform at the time, and staying with Go minimized context switching. I ended up with a working system that compiled shell scripts together into a master shell script per &quot;role&quot; (my term for a fully-defined node) and referenced data stored in a hierarchical YAML system (a very watered-down Hiera).&lt;/p&gt;

&lt;p&gt;It looked like this:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span class=&quot;na&quot;&gt;roles&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;memcached&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;server_types&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;lxc&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;]&lt;/span&gt;
    &lt;span class=&quot;na&quot;&gt;environments&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;pi&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;production&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;testing&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;]&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;The program would then search a structured directory for shell scripts that contained &lt;code class=&quot;highlighter-rouge&quot;&gt;memcached&lt;/code&gt;, &lt;code class=&quot;highlighter-rouge&quot;&gt;lxc&lt;/code&gt;, &lt;code class=&quot;highlighter-rouge&quot;&gt;production&lt;/code&gt;, and &lt;code class=&quot;highlighter-rouge&quot;&gt;testing&lt;/code&gt; and create two master scripts:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;memcached_lxc_production.sh&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;memcached_lxc_testing.sh&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
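
&lt;p&gt;As a rough illustration of that compilation step (a hypothetical sketch in shell rather than the original Go, with made-up paths), the idea is to gather every script whose path matches one of the role's tags and concatenate them into a master script:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the "compile shell scripts into a master role
# script" idea -- not the original Go implementation. Scripts live in a
# structured directory; one master script is built per combination of
# role, server type, and environment.

compile_role() {
  local scripts_dir="$1" role="$2" server_type="$3" env="$4"
  local out="${role}_${server_type}_${env}.sh"
  echo '#!/bin/bash' > "$out"
  # Append every matching script in sorted (top-down) order.
  for tag in "$role" "$server_type" "$env"; do
    find "$scripts_dir" -type f -name "*.sh" -path "*${tag}*" | sort | \
      while read -r script; do cat "$script" >> "$out"; done
  done
  echo "$out"
}

# Demo with a throwaway directory layout.
dir=$(mktemp -d)
mkdir -p "$dir/memcached" "$dir/lxc" "$dir/production"
echo 'echo install memcached' > "$dir/memcached/install.sh"
echo 'echo lxc tuning'        > "$dir/lxc/tune.sh"
echo 'echo firewall rules'    > "$dir/production/firewall.sh"
compile_role "$dir" memcached lxc production
cat memcached_lxc_production.sh
```

&lt;p&gt;Each master script is then plain Bash that can be shipped to a node and executed top-down.&lt;/p&gt;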

&lt;p&gt;I didn't like it at all. The first thing I removed was the YAML data system. I felt that it carried too much overhead. Then I decided to get rid of Go altogether. The Go component was only being used to organize shell scripts. Why not just make everything in shell?&lt;/p&gt;

&lt;h3 id=&quot;second-version&quot;&gt;Second Version&lt;/h3&gt;

&lt;p&gt;The next major iteration was completely written in Bash. It used Bash variables to store data and Bash arrays to store a list of scripts that should be executed. Here's what a &quot;role&quot; looked like:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;ATTRIBUTES&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=(&lt;/span&gt;
  &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;environment]&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;production&quot;&lt;/span&gt;
  &lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;location]&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;honolulu&quot;&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;

&lt;span class=&quot;nv&quot;&gt;RUN&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=(&lt;/span&gt;
  &lt;span class=&quot;s2&quot;&gt;&quot;site/acng.sh&quot;&lt;/span&gt;
  &lt;span class=&quot;s2&quot;&gt;&quot;site/packages.sh&quot;&lt;/span&gt;
  &lt;span class=&quot;s2&quot;&gt;&quot;rabbitmq&quot;&lt;/span&gt;
  &lt;span class=&quot;s2&quot;&gt;&quot;rabbitmq/repo.sh&quot;&lt;/span&gt;
  &lt;span class=&quot;s2&quot;&gt;&quot;rabbitmq/server.sh&quot;&lt;/span&gt;
  &lt;span class=&quot;s2&quot;&gt;&quot;openstack/rabbitmq.sh&quot;&lt;/span&gt;
&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;The &lt;code class=&quot;highlighter-rouge&quot;&gt;ATTRIBUTES&lt;/code&gt; hash stored key/value pairs that described unique features of the role. The &lt;code class=&quot;highlighter-rouge&quot;&gt;RUN&lt;/code&gt; array stored a list of scripts that would be executed in a top-down order. This was working very well and it enabled me to easily deploy all sorts of environments. I wasn't totally happy with the design, but I couldn't figure out what exactly I didn't like.&lt;/p&gt;

&lt;h3 id=&quot;roadblock&quot;&gt;Roadblock&lt;/h3&gt;

&lt;p&gt;That got put on hold when I hit a major roadblock: while deploying a Consul cluster, I needed to manage a JSON-based configuration file. What would be the best way to handle it?&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Static files meant data embedded inside the file and outside of the data system&lt;/li&gt;
  &lt;li&gt;Templates meant another layer of programming logic&lt;/li&gt;
  &lt;li&gt;External languages broke the Bash-only feature&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I was stumped for a few days until I realized: &lt;em&gt;Augeas&lt;/em&gt;! I've always had a lot of respect for the Augeas project but never had a reason to use it – until now. Even better, Augeas was able to cleanly parse and write JSON files. So I made Augeas an optional, but encouraged, component of Waffles.&lt;/p&gt;

&lt;h3 id=&quot;third-version&quot;&gt;Third Version&lt;/h3&gt;

&lt;p&gt;Now back to figuring out why I didn't like the current iteration. I realized I didn't like the format of the &quot;role&quot;. I wanted the &quot;role&quot; to look more like a shell script and not like a metadata file that describes a system.&lt;/p&gt;

&lt;p&gt;So I made some changes and was happier. The above RabbitMQ role now looked like:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;stdlib.data common
stdlib.data openstack/rabbitmq

stdlib.profile site/acng
stdlib.profile site/packages
stdlib.module rabbitmq
stdlib.module rabbitmq/repo
stdlib.module rabbitmq/server
stdlib.module openstack/rabbitmq&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h3 id=&quot;fourth-and-final-version&quot;&gt;Fourth and Final Version&lt;/h3&gt;

&lt;p&gt;There was still one part that bugged me: modules. In the above example, &lt;code class=&quot;highlighter-rouge&quot;&gt;rabbitmq&lt;/code&gt; and &lt;code class=&quot;highlighter-rouge&quot;&gt;openstack&lt;/code&gt; were both modules. I didn't like how I had &lt;code class=&quot;highlighter-rouge&quot;&gt;profiles&lt;/code&gt; and &lt;code class=&quot;highlighter-rouge&quot;&gt;modules&lt;/code&gt; mixed in the role. I refactored the above so that &lt;code class=&quot;highlighter-rouge&quot;&gt;profiles&lt;/code&gt; were just an abstraction of &lt;code class=&quot;highlighter-rouge&quot;&gt;modules&lt;/code&gt;, like in Puppet, but I didn't like that either. Finally, I decided to get rid of modules altogether. You can read more about this decision &lt;a href=&quot;http://waffles.terrarum.net/modules/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;So modules went away and the above turned into:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;stdlib.data common
stdlib.data openstack/rabbitmq

stdlib.profile common/acng
stdlib.profile common/packages
stdlib.profile rabbitmq/repo
stdlib.profile rabbitmq/server
stdlib.profile openstack/rabbitmq&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;At that point I was very satisfied with how it looked. I used it for a week or two, was still happy, and decided to finally release it publicly.&lt;/p&gt;

&lt;h2 id=&quot;current-status&quot;&gt;Current Status&lt;/h2&gt;

&lt;p&gt;Now I have a tool that I find fits my definition of &quot;simple&quot; and &quot;intuitive&quot;.&lt;/p&gt;

&lt;p&gt;But Waffles is nowhere near complete. The only milestone that has been reached is that I'm happy with its core design. Waffles certainly isn't at feature parity with other configuration management systems. For some features, reaching parity is only a matter of time. Other features just aren't applicable to Waffles's core ideas. For example, I want to have some kind of pull-based mechanism so clients can contact a central Waffles server. At the same time, I don't feel a PuppetDB or Etcd database belongs in Waffles.&lt;/p&gt;

&lt;h2 id=&quot;is-waffles-competing&quot;&gt;Is Waffles Competing?&lt;/h2&gt;

&lt;p&gt;Waffles is just another tool. I have no intention of marketing Waffles as a competitor to other configuration management systems. The only reason Puppet has been called out throughout this article is because it's the system I know best.&lt;/p&gt;

&lt;h3 id=&quot;is-waffles-better-than-x&quot;&gt;Is Waffles Better Than X?&lt;/h3&gt;

&lt;p&gt;Again, Waffles is just another tool that does a certain action in a certain way. The only thing that matters is what tool you find most useful. I have nothing but respect for every other configuration management system available today. If it wasn't for those projects, I wouldn't have been able to learn what I like and don't like in order to create my own.&lt;/p&gt;

&lt;p&gt;As mentioned above, Puppet is called out a lot because it's the system I know best. Puppet is a great tool and I still use it every day. I've also spent time with Chef, Ansible, Juju, and Salt and find them to be great tools, too.&lt;/p&gt;

&lt;p&gt;I will make one comment about Ansible, though, because comparisons &lt;a href=&quot;https://www.reddit.com/r/linux/comments/39koxm/waffles_is_a_simple_configuration_management/cs4txam&quot;&gt;will be made&lt;/a&gt; (I'm assuming because Ansible is known to be a small and simple configuration management system): I feel Waffles and Ansible are two separate tools that do things very differently. I feel the biggest difference is that Ansible is a &lt;a href=&quot;http://stackstorm.com/2015/04/10/the-return-of-workflows/&quot;&gt;workflow-based system&lt;/a&gt; while Waffles is just a bunch of shell functions that execute from top to bottom.&lt;/p&gt;

&lt;p&gt;I take no offence if you don't like Waffles. You can see from a lot of the other &lt;a href=&quot;https://github.com/jtopjian&quot;&gt;tools and code I've created&lt;/a&gt; that I do things very differently sometimes – that's just me.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;This article gave the history and thought process behind a new project of mine: &lt;a href=&quot;http://waffles.terrarum.net&quot;&gt;Waffles&lt;/a&gt;. Try it out and tell your friends. Feedback is welcome!&lt;/p&gt;
</description>
				<pubDate>Thu, 11 Jun 2015 00:00:00 -0600</pubDate>
				<link>http://terrarum.net/blog/waffles.html</link>
				<guid isPermaLink="true">http://terrarum.net/blog/waffles.html</guid>
			</item>
		
			<item>
				<title>Puppet Infrastructure 2015</title>
				<description>&lt;h2 id=&quot;preface&quot;&gt;Preface&lt;/h2&gt;

&lt;p&gt;My previous article, &lt;a href=&quot;http://terrarum.net/blog/puppet-infrastructure.html&quot;&gt;Puppet Infrastructure&lt;/a&gt;, was written back in 2013. It's one of the most popular articles on my blog and seems to have helped a good number of people (as well as &lt;a href=&quot;https://groups.google.com/d/msg/puppet-users/iwVjFP6iO3s/JJlyXTDYwkEJ&quot;&gt;draw criticism&lt;/a&gt; from others).&lt;/p&gt;

&lt;p&gt;With it being 2015 and all, I thought I'd write an updated version and include some notes and thoughts from using Puppet throughout 2014.&lt;/p&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#preface&quot; id=&quot;markdown-toc-preface&quot;&gt;Preface&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#prep-work&quot; id=&quot;markdown-toc-prep-work&quot;&gt;Prep-work&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#the-puppet-labs-package-repo&quot; id=&quot;markdown-toc-the-puppet-labs-package-repo&quot;&gt;The Puppet Labs Package Repo&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#install-and-configure-puppet-server&quot; id=&quot;markdown-toc-install-and-configure-puppet-server&quot;&gt;Install and Configure Puppet Server&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#configure-puppet-server&quot; id=&quot;markdown-toc-configure-puppet-server&quot;&gt;Configure Puppet Server&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#the-future-parser&quot; id=&quot;markdown-toc-the-future-parser&quot;&gt;The Future Parser&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#manifest-ordering&quot; id=&quot;markdown-toc-manifest-ordering&quot;&gt;Manifest Ordering&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#generate-an-ssl-certificate&quot; id=&quot;markdown-toc-generate-an-ssl-certificate&quot;&gt;Generate an SSL Certificate&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#install-and-configure-puppetdb&quot; id=&quot;markdown-toc-install-and-configure-puppetdb&quot;&gt;Install and configure PuppetDB&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#setting-up-version-control&quot; id=&quot;markdown-toc-setting-up-version-control&quot;&gt;Setting Up Version Control&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#what-about-environments&quot; id=&quot;markdown-toc-what-about-environments&quot;&gt;What About Environments?&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#building-the-site-module&quot; id=&quot;markdown-toc-building-the-site-module&quot;&gt;Building the Site Module&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#ntp&quot; id=&quot;markdown-toc-ntp&quot;&gt;NTP&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#a-note-about-the-module-subcommand&quot; id=&quot;markdown-toc-a-note-about-the-module-subcommand&quot;&gt;A Note About the Module Subcommand&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#keeping-track-of-modules&quot; id=&quot;markdown-toc-keeping-track-of-modules&quot;&gt;Keeping Track of Modules&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#configuring-ntp&quot; id=&quot;markdown-toc-configuring-ntp&quot;&gt;Configuring NTP&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#roles-and-profiles&quot; id=&quot;markdown-toc-roles-and-profiles&quot;&gt;Roles and Profiles&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#all-your-base&quot; id=&quot;markdown-toc-all-your-base&quot;&gt;All Your Base&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#class-or-include-or-contain&quot; id=&quot;markdown-toc-class-or-include-or-contain&quot;&gt;Class or Include or Contain?&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#the-first-puppet-run&quot; id=&quot;markdown-toc-the-first-puppet-run&quot;&gt;The First Puppet Run&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#configuring-the-firewall&quot; id=&quot;markdown-toc-configuring-the-firewall&quot;&gt;Configuring the Firewall&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#hiera-and-the-firewall&quot; id=&quot;markdown-toc-hiera-and-the-firewall&quot;&gt;Hiera and the Firewall&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#my-hiera-structure&quot; id=&quot;markdown-toc-my-hiera-structure&quot;&gt;My Hiera Structure&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#what-goes-in-hiera-and-what-uses-hiera&quot; id=&quot;markdown-toc-what-goes-in-hiera-and-what-uses-hiera&quot;&gt;What Goes in Hiera and What Uses Hiera?&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#finishing-up-the-puppet-server-role&quot; id=&quot;markdown-toc-finishing-up-the-puppet-server-role&quot;&gt;Finishing up the Puppet Server Role&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#ref&quot; id=&quot;markdown-toc-ref&quot;&gt;Ref?&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#creating-the-puppet-server-profile&quot; id=&quot;markdown-toc-creating-the-puppet-server-profile&quot;&gt;Creating the Puppet Server Profile&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#my-opinion-on-automatic-parameter-lookups&quot; id=&quot;markdown-toc-my-opinion-on-automatic-parameter-lookups&quot;&gt;My Opinion on Automatic Parameter Lookups&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#committing-everything&quot; id=&quot;markdown-toc-committing-everything&quot;&gt;Committing Everything&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#resources&quot; id=&quot;markdown-toc-resources&quot;&gt;Resources&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;/h2&gt;

&lt;p&gt;This tutorial will explain how to create a new Puppet environment using best practices such as version control, a site-local module, and roles &amp;amp; profiles. It will also include some practices that aren't considered &quot;best&quot;, but have helped my sanity while using Puppet in production on a daily basis. It assumes the reader will be using Ubuntu. The instructions were verified with Ubuntu 14.04.&lt;/p&gt;

&lt;p&gt;Note that this is &lt;em&gt;not&lt;/em&gt; an intro to Puppet – this tutorial assumes that you have at least a beginner's knowledge of Puppet.&lt;/p&gt;

&lt;h2 id=&quot;prep-work&quot;&gt;Prep-work&lt;/h2&gt;

&lt;h3 id=&quot;the-puppet-labs-package-repo&quot;&gt;The Puppet Labs Package Repo&lt;/h3&gt;

&lt;p&gt;Install the Puppet Labs apt repo:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;wget http://apt.puppetlabs.com/puppetlabs-release-trusty.deb
dpkg &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;.deb
rm &lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;.deb
apt-get update&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h3 id=&quot;install-and-configure-puppet-server&quot;&gt;Install and Configure Puppet Server&lt;/h3&gt;

&lt;p&gt;This tutorial will use Puppet Server – the next generation of Puppet Master. You can read more about it &lt;a href=&quot;http://puppetlabs.com/blog/puppet-server-bringing-soa-to-a-puppet-master-near-you&quot;&gt;here&lt;/a&gt; and &lt;a href=&quot;https://github.com/puppetlabs/puppet-server&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To install it, do:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;apt-get install &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt; puppetserver&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;I haven't used Puppet Server extensively yet, but it's worked fairly well so far. I plan to use it throughout 2015 and if I run into any major issues, I'll update this tutorial.&lt;/p&gt;

&lt;h3 id=&quot;configure-puppet-server&quot;&gt;Configure Puppet Server&lt;/h3&gt;

&lt;p&gt;By default, Puppet Server requires 2GB of RAM. If your server does not have 2GB, you can lower the amount by editing &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/default/puppetserver&lt;/code&gt;:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;sed &lt;span class=&quot;nt&quot;&gt;-i&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-e&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'s/JAVA_ARGS=&quot;-Xms2g -Xmx2g -XX:MaxPermSize=256m&quot;/JAVA_ARGS=&quot;-Xms1g -Xmx1g -XX:MaxPermSize=256m&quot;/'&lt;/span&gt; /etc/default/puppetserver&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Additionally, I configure my Puppet servers with the following two settings:&lt;/p&gt;

&lt;h4 id=&quot;the-future-parser&quot;&gt;The Future Parser&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;puppet config &lt;span class=&quot;nb&quot;&gt;set&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--section&lt;/span&gt; main parser future&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;I've been using the future parser since it was first announced in Puppet 3.2. In my opinion, iteration was a sorely missing feature in Puppet and I have abused the hell out of it since it became available.&lt;/p&gt;

&lt;p&gt;However cool the future parser may be, keep in mind that the features it provides are not official. Future releases could remove them, or worse, break them and thus break your environment. This actually happened to me with the Puppet 3.5 release and I spent the afternoon recovering from an almost catastrophic puppet run.&lt;/p&gt;

&lt;h4 id=&quot;manifest-ordering&quot;&gt;Manifest Ordering&lt;/h4&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;puppet config &lt;span class=&quot;nb&quot;&gt;set&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--section&lt;/span&gt; main ordering manifest&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;I've included a list of great resources at the end of this article. One of them covers the &quot;manifest ordering&quot; feature in detail. It's a great read and I agree with its principles on why you &lt;em&gt;shouldn't&lt;/em&gt; use it.&lt;/p&gt;

&lt;p&gt;When I first started using Puppet, I hated dependency-based ordering, but since there was no other option, I had to live with it and eventually it became second nature.&lt;/p&gt;

&lt;p&gt;However, if I could get back the ridiculous amount of hours I've spent trying to resolve ordering issues, I'd have an awesome vacation. Dependency-based ordering is great in theory, but unless you write all of your Puppet manifests and modules yourself and are very strict about following dependency conventions, it's very difficult to get right in practice. Because of this, I've begun using manifest ordering.&lt;/p&gt;

&lt;p&gt;I understand that by using manifest ordering, I'm not helping solve the larger issue of fixing dependency problems when I find them in others' modules. I wish I had more time to do that, but unfortunately I'm not paid to work on Puppet all day – sometimes I need to get back to work.&lt;/p&gt;

&lt;p&gt;And to play devil's advocate: really, where else am I going to have to use a dependency-based system? Everything else I use executes in a top-down format. And has &lt;code class=&quot;highlighter-rouge&quot;&gt;for&lt;/code&gt; loops.&lt;/p&gt;

&lt;h3 id=&quot;generate-an-ssl-certificate&quot;&gt;Generate an SSL Certificate&lt;/h3&gt;

&lt;p&gt;Since Puppet Server has just been installed, it hasn't generated an SSL cert yet. It will do this the first time you start the service, but the cert will be needed in the following step. To generate the cert, do:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;puppet cert generate &lt;span class=&quot;k&quot;&gt;$(&lt;/span&gt;puppet config print certname&lt;span class=&quot;k&quot;&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h2 id=&quot;install-and-configure-puppetdb&quot;&gt;Install and configure PuppetDB&lt;/h2&gt;

&lt;p&gt;PuppetDB is a complementary service to Puppet. It's not required, but it's very useful. Unfortunately, this tutorial will only cover the installation of PuppetDB and not what makes it so useful.&lt;/p&gt;
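&lt;p&gt;To give a small taste of that usefulness anyway: once PuppetDB is wired up, you can query inventory data over its HTTP API. The sketch below only builds and prints the query URL so it's safe to run anywhere; the hostname is a placeholder, and the exact endpoint path (&lt;code class=&quot;highlighter-rouge&quot;&gt;/v3/nodes&lt;/code&gt; here) varies by PuppetDB version.&lt;/p&gt;

```shell
# Sketch of a PuppetDB inventory query. The host is a placeholder and
# the /v3 endpoint path varies by PuppetDB version.
PUPPETDB="puppetdb.example.com"
QUERY='["=", ["fact", "operatingsystem"], "Ubuntu"]'
URL="http://${PUPPETDB}:8080/v3/nodes"
echo "${URL}"
# Uncomment to actually run the query against a live PuppetDB:
# curl -G "${URL}" --data-urlencode "query=${QUERY}"
```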

&lt;p&gt;To install and configure it:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; /etc/puppet/modules
puppet module install puppetlabs/puppetdb
apt-get &lt;span class=&quot;nt&quot;&gt;-y&lt;/span&gt; install puppetdb
&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; /root
&lt;span class=&quot;nb&quot;&gt;echo &lt;/span&gt;include puppetdb &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; pdb.pp
&lt;span class=&quot;nb&quot;&gt;echo &lt;/span&gt;include puppetdb::master::config &lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; pdb.pp
puppet apply &lt;span class=&quot;nt&quot;&gt;--verbose&lt;/span&gt; pdb.pp
rm pdb.pp
rm &lt;span class=&quot;nt&quot;&gt;-rf&lt;/span&gt; /etc/puppet/modules/&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Note that the certificate generated in the previous step must exist for PuppetDB to install cleanly.&lt;/p&gt;

&lt;h2 id=&quot;setting-up-version-control&quot;&gt;Setting Up Version Control&lt;/h2&gt;

&lt;p&gt;It's a best practice to keep all of your configuration management information in a version control system such as &lt;code class=&quot;highlighter-rouge&quot;&gt;git&lt;/code&gt;. Since the repository will probably contain sensitive information about your environment, it's recommended to use an internal &lt;code class=&quot;highlighter-rouge&quot;&gt;git&lt;/code&gt; server. Managing one is very easy with &lt;a href=&quot;http://gitolite.com/gitolite/master-toc.html&quot;&gt;gitolite&lt;/a&gt;.&lt;/p&gt;

&lt;p class=&quot;alert alert-error&quot;&gt;Seriously. Do &lt;em&gt;not&lt;/em&gt; use GitHub, or any other public git host, if your repository will contain sensitive information. And if you accidentally check in sensitive information, delete the entire repository immediately. Don't just delete the sensitive information and commit the changes. It sounds like common sense, I know, but it happens. Also consider using some type of encryption on your repo, like &lt;a href=&quot;https://github.com/crayfishx/hiera-gpg&quot;&gt;hiera-gpg&lt;/a&gt; or &lt;a href=&quot;https://github.com/StackExchange/blackbox&quot;&gt;blackbox&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Some people keep the entire &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/puppet&lt;/code&gt; directory in the repository. There's nothing wrong with this and if this is what you'd like to do, the following should work:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; /etc/puppet
git init
git add auth.conf fileserver.conf puppet.conf manifests/ modules/
git commit &lt;span class=&quot;nt&quot;&gt;-m&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Initial commit&quot;&lt;/span&gt;
git remote add origin &amp;lt;remote repo location&amp;gt;
git push &lt;span class=&quot;nt&quot;&gt;-u&lt;/span&gt; origin master&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Another way of keeping all Puppet configuration in a repository is to create a module that will only be used for the particular Puppet environment being created. This method will be used for this tutorial.&lt;/p&gt;

&lt;p&gt;Begin by creating a module:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; /etc/puppet/modules
mkdir site
&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;site
mkdir manifests ext data
touch data/.keep
touch manifests/.keep
&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;ext
touch site.pp
ln &lt;span class=&quot;nt&quot;&gt;-s&lt;/span&gt; /etc/puppet/modules/site/ext/site.pp /etc/puppet/manifests/site.pp
&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; ..
git init
git add &lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
git commit &lt;span class=&quot;nt&quot;&gt;-m&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Initial commit&quot;&lt;/span&gt;
git remote add origin &amp;lt;remote repo location&amp;gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Three directories were created inside the &lt;code class=&quot;highlighter-rouge&quot;&gt;site&lt;/code&gt; module:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;manifests: where your site-local manifests will go&lt;/li&gt;
  &lt;li&gt;data: where your Hiera data will go&lt;/li&gt;
  &lt;li&gt;ext: where your &quot;extra&quot; files will go – files that are complementary or supplementary to your environment, such as the main &lt;code class=&quot;highlighter-rouge&quot;&gt;site.pp&lt;/code&gt; file.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;what-about-environments&quot;&gt;What About Environments?&lt;/h3&gt;

&lt;p&gt;I experimented with environments throughout 2013 and 2014 and ultimately decided to not use them. By not using environments, my production sites now have multiple Puppet servers deployed: one for each project or domain of responsibility. If I used environments effectively, all of those Puppet servers could be combined into a single server.&lt;/p&gt;

&lt;p&gt;However, I found that using environments in production caused some issues:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If the location that housed the main Puppet server was down (lost power, etc), no nodes anywhere could talk to Puppet.&lt;/li&gt;
  &lt;li&gt;If the filesystem on the central Puppet server became corrupt, it could affect all nodes. This happened to me in 2014, but the damage was restricted to only one project.&lt;/li&gt;
  &lt;li&gt;A single main Puppet server would require all projects to work on the same version of Puppet. This is not possible for some projects.&lt;/li&gt;
  &lt;li&gt;Similarly, upgrading Puppet means upgrading across the entire &quot;federation&quot;.&lt;/li&gt;
  &lt;li&gt;Having to type extra characters and tabs to reach the environment got tedious (&lt;code class=&quot;highlighter-rouge&quot;&gt;cd /etc/puppet/env&amp;lt;tab&amp;gt;/project-name&amp;lt;tab&amp;gt;/mod&amp;lt;tab&amp;gt;/site&lt;/code&gt;). Though this was not a significant reason, it was a sense of relief to just go back to &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/puppet/mod&amp;lt;tab&amp;gt;/site&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With regard to development and testing deployments, it's way easier for me to use Vagrant to fire up a small Puppet server than to configure a central Puppet server with environments.&lt;/p&gt;

&lt;p&gt;This isn't to say that environments are a useless feature. I've just found that they haven't worked well for my specific use-cases.&lt;/p&gt;

&lt;h2 id=&quot;building-the-site-module&quot;&gt;Building the Site Module&lt;/h2&gt;

&lt;p&gt;At this point, Puppet is installed and an empty &lt;code class=&quot;highlighter-rouge&quot;&gt;site&lt;/code&gt; module exists. Now we'll begin using Puppet to configure the Puppet server itself as well as any other server you place under Puppet control.&lt;/p&gt;

&lt;h3 id=&quot;ntp&quot;&gt;NTP&lt;/h3&gt;

&lt;p&gt;Things begin to break when two servers with skewed times try communicating with each other. To ensure this doesn't happen, NTP will be installed and configured. First, install the &lt;code class=&quot;highlighter-rouge&quot;&gt;puppetlabs/ntp&lt;/code&gt; Puppet module:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; /etc/puppet/modules
puppet module install puppetlabs/ntp&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h3 id=&quot;a-note-about-the-module-subcommand&quot;&gt;A Note About the Module Subcommand&lt;/h3&gt;

&lt;p&gt;The &lt;code class=&quot;highlighter-rouge&quot;&gt;puppet&lt;/code&gt; command has a built-in subcommand to install modules. It's able to find the module by looking it up at the &lt;a href=&quot;http://forge.puppetlabs.com&quot;&gt;Forge&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There are pros and cons to this. On one hand, it provides an easy way to install a module plus any other modules it depends on. On the other hand, if you install a module that has a conflicting dependency with another module, the command will break. Additionally, sometimes the version of the module hosted on the Forge is outdated. When this happens, you need to manually download the module from its designated home – usually GitHub.&lt;/p&gt;

&lt;p&gt;I usually clone the modules directly from Github. The &lt;code class=&quot;highlighter-rouge&quot;&gt;puppet module&lt;/code&gt; command was included in this tutorial as an example.&lt;/p&gt;
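&lt;p&gt;One gotcha when cloning from GitHub: repositories are usually named &lt;code class=&quot;highlighter-rouge&quot;&gt;author-module&lt;/code&gt;, while Puppet expects the module directory to be the bare module name. A small sketch (the repository URL is just an example):&lt;/p&gt;

```shell
# Derive the module directory name from a GitHub repository URL.
# Puppet expects the directory to be "ntp", not "puppetlabs-ntp".
repo="https://github.com/puppetlabs/puppetlabs-ntp.git"
name=$(basename "$repo" .git)   # -> puppetlabs-ntp
name=${name#*-}                 # strip the author prefix -> ntp
echo "$name"
# Uncomment to actually clone into the module path:
# git clone "$repo" "/etc/puppet/modules/$name"
```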

&lt;h3 id=&quot;keeping-track-of-modules&quot;&gt;Keeping Track of Modules&lt;/h3&gt;

&lt;p&gt;The previous version of this article showed how to create a &lt;code class=&quot;highlighter-rouge&quot;&gt;bash&lt;/code&gt; script that re-downloads all of the modules you use. There's nothing wrong with that method, and it works great.&lt;/p&gt;

&lt;p&gt;The previous version also mentioned &lt;a href=&quot;http://librarian-puppet.com/&quot;&gt;librarian-puppet&lt;/a&gt;. I tried to use it, but found its dependency resolution and module metadata checks to be way too strict.&lt;/p&gt;

&lt;p&gt;Then I started using &lt;a href=&quot;http://somethingsinistral.net/blog/rethinking-puppet-deployment/&quot;&gt;r10k&lt;/a&gt; which you can read about &lt;a href=&quot;http://terrarum.net/blog/puppet-infrastructure-with-r10k.html&quot;&gt;here&lt;/a&gt;. It's another great tool, but it didn't make a lot of sense to keep using it once I stopped using environments.&lt;/p&gt;

&lt;p&gt;Dan Bode has a tool called &lt;a href=&quot;https://github.com/bodepd/librarian-puppet-simple&quot;&gt;librarian-puppet-simple&lt;/a&gt; that is a stripped down version of &lt;code class=&quot;highlighter-rouge&quot;&gt;librarian-puppet&lt;/code&gt;. It simply installs a set of modules that you list in a &lt;code class=&quot;highlighter-rouge&quot;&gt;Puppetfile&lt;/code&gt; – no dependency or metadata checks. &lt;code class=&quot;highlighter-rouge&quot;&gt;librarian-puppet-simple&lt;/code&gt; is really awesome, and you should check it out. This is what I've been using for quite a while now, and I don't see a reason to stop.&lt;/p&gt;
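&lt;p&gt;For reference, a minimal &lt;code class=&quot;highlighter-rouge&quot;&gt;Puppetfile&lt;/code&gt; sketch for &lt;code class=&quot;highlighter-rouge&quot;&gt;librarian-puppet-simple&lt;/code&gt; might look like the following. The module names and git URL are illustrative, not a prescribed set:&lt;/p&gt;

```ruby
# Puppetfile: librarian-puppet-simple fetches exactly what is listed
# here, with no dependency or metadata checks.
mod 'puppetlabs/ntp'        # fetched from the Forge
mod 'puppetlabs/puppetdb'
mod 'stdlib',
  :git => 'https://github.com/puppetlabs/puppetlabs-stdlib.git'
```

&lt;p&gt;If I recall the CLI correctly, running &lt;code class=&quot;highlighter-rouge&quot;&gt;librarian-puppet install --path /etc/puppet/modules&lt;/code&gt; then downloads each listed module into place.&lt;/p&gt;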

&lt;h3 id=&quot;configuring-ntp&quot;&gt;Configuring NTP&lt;/h3&gt;

&lt;p&gt;Now that the &lt;code class=&quot;highlighter-rouge&quot;&gt;puppetlabs/ntp&lt;/code&gt; module is installed, it can be used to install and configure NTP on any server under Puppet control. The &lt;code class=&quot;highlighter-rouge&quot;&gt;puppetlabs/ntp&lt;/code&gt; module is a simple module and rarely needs any parameters.&lt;/p&gt;

&lt;p&gt;In the &lt;code class=&quot;highlighter-rouge&quot;&gt;site.pp&lt;/code&gt; file, add the following:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;&lt;span class=&quot;k&quot;&gt;node&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'puppet.example.com'&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'::ntp'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h3 id=&quot;roles-and-profiles&quot;&gt;Roles and Profiles&lt;/h3&gt;

&lt;p&gt;There's an issue with this, though: this class will need to be applied to every server:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;&lt;span class=&quot;k&quot;&gt;node&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'puppet.example.com'&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'::ntp'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;node&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'www.example.com'&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'::ntp'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;node&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'db.example.com'&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'::ntp'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;There's a lot of repeated configuration here and it'll only get worse as more modules are added. A better way to apply modules to nodes is to use the &quot;Roles and Profiles&quot; pattern. The end of this article has some links that will describe this pattern in detail.&lt;/p&gt;

&lt;p&gt;A good profile to start with is the &quot;base&quot; profile. This profile will be applied to &lt;em&gt;all&lt;/em&gt; servers, so it's important that this profile contains very generic and global settings. To start, create a new manifest called &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/puppet/modules/site/manifests/profiles/base.pp&lt;/code&gt; with the following contents:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;&lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;site::profiles::base&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'::ntp'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Next, create a role:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;&lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;site::roles::puppet_server&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;contain&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;site::profiles::base&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Finally, apply the role to the node:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;&lt;span class=&quot;k&quot;&gt;node&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'puppet.example.com'&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;contain&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;site::roles::puppet_server&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;With only the &lt;code class=&quot;highlighter-rouge&quot;&gt;ntp&lt;/code&gt; module being used in &lt;code class=&quot;highlighter-rouge&quot;&gt;site::profiles::base&lt;/code&gt;, this actually seems more complicated. To better show the usefulness of profiles, add the following to the &lt;code class=&quot;highlighter-rouge&quot;&gt;base&lt;/code&gt; profile:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$packages&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'git'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'vim'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;package&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$packages&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;py&quot;&gt;ensure&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;latest&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Before, you would have had to add those two lines to each node. Now you add them to one profile and they are applied to every node that has the &lt;code class=&quot;highlighter-rouge&quot;&gt;base&lt;/code&gt; profile.&lt;/p&gt;
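&lt;p&gt;Put together, the &lt;code class=&quot;highlighter-rouge&quot;&gt;base&lt;/code&gt; profile now looks like this (a sketch combining the two snippets above):&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;class site::profiles::base {
  # Global time synchronization
  class { '::ntp': }

  # Packages every server should have
  $packages = ['git', 'vim']
  package { $packages: ensure =&gt; latest }
}&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;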

&lt;h4 id=&quot;all-your-base&quot;&gt;All Your Base&lt;/h4&gt;

&lt;p&gt;Since my &quot;base&quot; settings are so common across different Puppet-controlled environments, I have started creating an actual &quot;base&quot; module called &lt;a href=&quot;https://github.com/jtopjian/puppet-bass&quot;&gt;bass&lt;/a&gt;. I haven't yet decided whether to pronounce it bass as in &quot;base&quot; or bass as in the fish.&lt;/p&gt;

&lt;h4 id=&quot;class-or-include-or-contain&quot;&gt;Class or Include or Contain?&lt;/h4&gt;

&lt;p&gt;There's a lot of great documentation on this subject, written better than I could manage; see the end of this article for links. Once you've read it and understand the history behind these three keywords, here's my $0.02:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;I use &lt;code class=&quot;highlighter-rouge&quot;&gt;contain&lt;/code&gt; in my roles and nodes. This is because I enforce a no-parameter policy in roles and nodes.&lt;/li&gt;
  &lt;li&gt;I use &lt;code class=&quot;highlighter-rouge&quot;&gt;class&lt;/code&gt; in my profiles since those do use parameters. I still sometimes use Anchors and explicit ordering, but I'm finding that these are not needed (as much) in Puppet 3.7+ and Puppet Server and by using manifest ordering.&lt;/li&gt;
  &lt;li&gt;Sometimes I'll throw a &lt;code class=&quot;highlighter-rouge&quot;&gt;contain&lt;/code&gt; or &lt;code class=&quot;highlighter-rouge&quot;&gt;include&lt;/code&gt; in the profiles, though, if I'm positive that I'll never need to add parameters to them and that their ordering is stable.&lt;/li&gt;
&lt;/ul&gt;
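&lt;p&gt;As a quick sketch of the difference (&lt;code class=&quot;highlighter-rouge&quot;&gt;::motd&lt;/code&gt; here stands in for any hypothetical parameter-less module):&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;class site::roles::puppet_server {
  # contain: anything that depends on this role also
  # waits for the contained profile's resources
  contain site::profiles::base
}

class site::profiles::base {
  # resource-like declaration: the only way to pass
  # parameters directly to a class
  class { '::ntp':
    servers =&gt; ['0.pool.ntp.org'],
  }

  # include: safe to declare multiple times, but gives
  # no ordering guarantees relative to this class
  include ::motd
}&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;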

&lt;h3 id=&quot;the-first-puppet-run&quot;&gt;The First Puppet Run&lt;/h3&gt;

&lt;p&gt;At this point, Puppet can be run for the first time. If all goes well, NTP will be installed and running when Puppet has finished:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;puppet apply &lt;span class=&quot;nt&quot;&gt;--verbose&lt;/span&gt; /etc/puppet/manifests/site.pp&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h3 id=&quot;configuring-the-firewall&quot;&gt;Configuring the Firewall&lt;/h3&gt;

&lt;p&gt;Next, the &lt;code class=&quot;highlighter-rouge&quot;&gt;puppetlabs/firewall&lt;/code&gt; module will be used to build the basis of a deny-by-default firewall.&lt;/p&gt;

&lt;p&gt;Install the &lt;code class=&quot;highlighter-rouge&quot;&gt;puppetlabs/firewall&lt;/code&gt; module by cloning it from GitHub:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; /etc/puppet/modules
git clone https://github.com/puppetlabs/puppetlabs-firewall firewall&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Add it to your &lt;code class=&quot;highlighter-rouge&quot;&gt;Puppetfile&lt;/code&gt;, too:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-ruby&quot; data-lang=&quot;ruby&quot;&gt;&lt;span class=&quot;n&quot;&gt;mod&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'firewall'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;ss&quot;&gt;:git&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'https://github.com/puppetlabs/puppetlabs-firewall'&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Next, a new manifest called &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/puppet/modules/site/manifests/firewall.pp&lt;/code&gt; will be created:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;&lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;site::firewall&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;firewall&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'000 accept all icmp'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;proto&lt;/span&gt;  &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'icmp'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;action&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'accept'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;

  &lt;span class=&quot;n&quot;&gt;firewall&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'001 accept all to lo interface'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;proto&lt;/span&gt;   &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'all'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;iniface&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'lo'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;action&lt;/span&gt;  &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'accept'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;

  &lt;span class=&quot;n&quot;&gt;firewall&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'002 accept related established rules'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;proto&lt;/span&gt;   &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'all'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;ctstate&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'RELATED'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'ESTABLISHED'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;],&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;action&lt;/span&gt;  &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'accept'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;

  &lt;span class=&quot;n&quot;&gt;firewall&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'999 drop all'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;proto&lt;/span&gt;  &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'all'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;action&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'drop'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Next, add the following to the &lt;code class=&quot;highlighter-rouge&quot;&gt;site::profiles::base&lt;/code&gt; profile, before the base packages are applied:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;&lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'::firewall'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'::site::firewall'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Now all servers will have a deny-by-default firewall applied. I wouldn't recommend applying this configuration yet because you'll be locked out if you're working on this server remotely.&lt;/p&gt;
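&lt;p&gt;For example, adding a rule like the following to &lt;code class=&quot;highlighter-rouge&quot;&gt;site::firewall&lt;/code&gt; before the &lt;code class=&quot;highlighter-rouge&quot;&gt;999&lt;/code&gt; rule keeps SSH reachable (a sketch; adjust the port if your SSH daemon listens elsewhere):&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;firewall { '004 allow inbound ssh':
  proto  =&gt; 'tcp',
  dport  =&gt; '22',
  action =&gt; 'accept',
}&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;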

&lt;h3 id=&quot;hiera-and-the-firewall&quot;&gt;Hiera and the Firewall&lt;/h3&gt;

&lt;p&gt;This is a good place to introduce &lt;a href=&quot;http://docs.puppetlabs.com/hiera/1/index.html&quot;&gt;Hiera&lt;/a&gt; - a tool to store structured configuration data outside of Puppet manifests. Hiera is installed by default with the Puppet package, so installing a &lt;code class=&quot;highlighter-rouge&quot;&gt;hiera&lt;/code&gt; package is not needed. However, to utilize Hiera's &lt;em&gt;merging&lt;/em&gt; feature, the &lt;code class=&quot;highlighter-rouge&quot;&gt;deep_merge&lt;/code&gt; gem needs to be installed:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;gem install deep_merge&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;To configure Hiera, create &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/puppet/modules/site/ext/hiera.yaml&lt;/code&gt; with the following contents:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;:backends&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;yaml&lt;/span&gt;

&lt;span class=&quot;s&quot;&gt;:hierarchy&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;common&quot;&lt;/span&gt;

&lt;span class=&quot;s&quot;&gt;:yaml&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;s&quot;&gt;:datadir&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;/etc/puppet/modules/site/data&lt;/span&gt;

&lt;span class=&quot;s&quot;&gt;:merge_behavior&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;deeper&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Next, link this configuration file to two locations:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;ln &lt;span class=&quot;nt&quot;&gt;-s&lt;/span&gt; /etc/puppet/modules/site/ext/hiera.yaml /etc/
ln &lt;span class=&quot;nt&quot;&gt;-s&lt;/span&gt; /etc/puppet/modules/site/ext/hiera.yaml /etc/puppet&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;The first location is so you can use the &lt;code class=&quot;highlighter-rouge&quot;&gt;hiera&lt;/code&gt; command-line tool. The second location is for Puppet itself.&lt;/p&gt;

&lt;p&gt;At the moment, the only hierarchy configured is &lt;code class=&quot;highlighter-rouge&quot;&gt;common&lt;/code&gt;. This means that Hiera will only read data from a single file: &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/puppet/modules/site/data/common.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Add the following to that file:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;trusted_networks&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;192.168.1.0/24'&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;10.255.0.0/24'&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Add any other networks or hosts (&lt;code class=&quot;highlighter-rouge&quot;&gt;/32&lt;/code&gt;) to this list that you need.&lt;/p&gt;

&lt;p&gt;Next, add the following before the &lt;code class=&quot;highlighter-rouge&quot;&gt;999&lt;/code&gt; rule in &lt;code class=&quot;highlighter-rouge&quot;&gt;site::firewall&lt;/code&gt;:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$trusted_networks&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;hiera_array&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'trusted_networks'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;$trusted_networks&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;each&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;|&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;$network&lt;/span&gt;&lt;span class=&quot;err&quot;&gt;|&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;firewall&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;003 allow all traffic from &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;${network}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;proto&lt;/span&gt;  &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'all'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;source&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$network&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;action&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'accept'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;This block of Puppet code uses Puppet's iteration feature from the future parser, so you'll need to make sure you have it enabled in &lt;code class=&quot;highlighter-rouge&quot;&gt;puppet.conf&lt;/code&gt;.&lt;/p&gt;
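&lt;p&gt;If it isn't enabled yet, a line like the following in the &lt;code class=&quot;highlighter-rouge&quot;&gt;[main]&lt;/code&gt; section of &lt;code class=&quot;highlighter-rouge&quot;&gt;puppet.conf&lt;/code&gt; turns it on (shown in isolation; your &lt;code class=&quot;highlighter-rouge&quot;&gt;puppet.conf&lt;/code&gt; will contain other settings):&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-ini&quot; data-lang=&quot;ini&quot;&gt;[main]
parser = future&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;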

&lt;p&gt;Now the next time you apply the Puppet configuration, a deny-by-default firewall will be enabled with explicit allow rules for each trusted network you specified in Hiera.&lt;/p&gt;

&lt;h4 id=&quot;my-hiera-structure&quot;&gt;My Hiera Structure&lt;/h4&gt;

&lt;p&gt;Over the past year or so, I have standardized on the following Hiera hierarchy:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span class=&quot;s&quot;&gt;:hierarchy&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;node/%{::hostname}&quot;&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;location/%{::location}&quot;&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;role/%{::role}&quot;&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;common&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&quot;node&quot; is the hostname (or fqdn) of the node. While the idea of having node-specific settings goes against the &quot;Pets v Cattle&quot; argument, it's sometimes unavoidable. In my production environments, only a small percentage of nodes have their own node-specific settings, and even then it's maybe one or two values.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&quot;location&quot; is an arbitrary fact to group nodes:&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;echo &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;location&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;honolulu &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; /etc/facter/facts.d/location.txt
&lt;span class=&quot;nb&quot;&gt;echo &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;location&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;maui &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; /etc/facter/facts.d/location.txt
&lt;span class=&quot;nb&quot;&gt;echo &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;location&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;dc1 &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; /etc/facter/facts.d/location.txt&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;ul&gt;
  &lt;li&gt;&quot;role&quot; is another fact that matches the role that the node will have applied:&lt;/li&gt;
&lt;/ul&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;echo &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;role&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;puppet_server &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; /etc/facter/facts.d/role.txt&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p class=&quot;alert alert-error&quot;&gt;Be careful about using Facter to categorize nodes on the node itself! Let's say a node with a role of &lt;code class=&quot;highlighter-rouge&quot;&gt;dns&lt;/code&gt; was compromised and the intruder understood that they could change the role to &lt;code class=&quot;highlighter-rouge&quot;&gt;mysql&lt;/code&gt; by replacing the fact in &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/facter/facts.d/role.txt&lt;/code&gt;. On the next Puppet run, MySQL would be installed, applying sensitive information to a node that the intruder now has access to. It might also break your DNS server.&lt;/p&gt;

&lt;h4 id=&quot;what-goes-in-hiera-and-what-uses-hiera&quot;&gt;What Goes in Hiera and What Uses Hiera?&lt;/h4&gt;

&lt;p&gt;My personal rules of Hiera data are:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Hiera data is used &lt;em&gt;only&lt;/em&gt; in profiles. If I write my own module that will be located in &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/puppet/modules&lt;/code&gt;, I still use class parameters.&lt;/li&gt;
  &lt;li&gt;Since I only use Hiera in profiles, all Hiera data is site-specific. So the deciding question becomes: &quot;what information needs to be stripped from this module so that it will work in another environment?&quot; That data is then moved to Hiera.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;finishing-up-the-puppet-server-role&quot;&gt;Finishing up the Puppet Server Role&lt;/h3&gt;

&lt;p&gt;Up until now, broad configurations that could be applied to any node have been used. Now we'll create a more specific role and profile to configure the Puppet Server.&lt;/p&gt;

&lt;p&gt;In order to do this, several modules will be needed:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-ruby&quot; data-lang=&quot;ruby&quot;&gt;&lt;span class=&quot;n&quot;&gt;mod&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'concat'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;ss&quot;&gt;:git&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'https://github.com/puppetlabs/puppetlabs-concat'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;ss&quot;&gt;:ref&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'1.1.2'&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;mod&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'inifile'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;ss&quot;&gt;:git&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'https://github.com/puppetlabs/puppetlabs-inifile'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;ss&quot;&gt;:ref&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'792d35cdb48fc2cba08ab578c1b7bc42ef3a0ace'&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;mod&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'puppet'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;ss&quot;&gt;:git&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'https://github.com/jtopjian/puppet-puppet'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;ss&quot;&gt;:ref&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'puppetserver'&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;mod&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'puppetdb'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;ss&quot;&gt;:git&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'https://github.com/puppetlabs/puppetlabs-puppetdb'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;ss&quot;&gt;:ref&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'4.1.0'&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;mod&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'postgresql'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;ss&quot;&gt;:git&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'https://github.com/puppetlabs/puppetlabs-postgresql'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;ss&quot;&gt;:ref&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'4.1.0'&lt;/span&gt;

&lt;span class=&quot;n&quot;&gt;mod&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'stdlib'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;ss&quot;&gt;:git&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'https://github.com/puppetlabs/puppetlabs-stdlib'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;ss&quot;&gt;:ref&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'4.4.x'&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h4 id=&quot;ref&quot;&gt;Ref?&lt;/h4&gt;

&lt;p&gt;In production, I mark all modules in my &lt;code class=&quot;highlighter-rouge&quot;&gt;Puppetfile&lt;/code&gt; with a reference. This reference is the known working release or commit of that module. This means that if I ever want to test an updated module, I'll need to actually create a test environment, rebuild everything, and confirm it works. But the alternative of just going cowboy and deploying all new releases to the production environment will only cause a lot of pain (and downtime).&lt;/p&gt;

&lt;h4 id=&quot;creating-the-puppet-server-profile&quot;&gt;Creating the Puppet Server Profile&lt;/h4&gt;

&lt;p&gt;Create &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/puppet/modules/site/manifests/profiles/puppet/server.pp&lt;/code&gt; with the following contents:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;&lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;site::profiles::puppet::server&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;

  &lt;span class=&quot;c&quot;&gt;# Hiera
&lt;/span&gt;  &lt;span class=&quot;nv&quot;&gt;$main_settings&lt;/span&gt;           &lt;span class=&quot;err&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;hiera&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'site::puppet::settings::main'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
  &lt;span class=&quot;nv&quot;&gt;$agent_settings&lt;/span&gt;          &lt;span class=&quot;err&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;hiera&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'site::puppet::settings::agent'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
  &lt;span class=&quot;nv&quot;&gt;$master_settings&lt;/span&gt;         &lt;span class=&quot;err&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;hiera&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'site::puppet::settings::master'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
  &lt;span class=&quot;nv&quot;&gt;$server_default_settings&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;hiera&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'site::puppet::settings::server_default'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
  &lt;span class=&quot;nv&quot;&gt;$puppet_package_ensure&lt;/span&gt;   &lt;span class=&quot;err&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;hiera&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'site::puppet::puppet_package_ensure'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
  &lt;span class=&quot;nv&quot;&gt;$server_package_ensure&lt;/span&gt;   &lt;span class=&quot;err&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;hiera&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'site::puppet::server_package_ensure'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;

  &lt;span class=&quot;c&quot;&gt;# Resources
&lt;/span&gt;  &lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'::puppet'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;server&lt;/span&gt;                  &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;kc&quot;&gt;true&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;main_settings&lt;/span&gt;           &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$main_settings&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;agent_settings&lt;/span&gt;          &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$agent_settings&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;master_settings&lt;/span&gt;         &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$master_settings&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;server_default_settings&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$server_default_settings&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;puppet_package_ensure&lt;/span&gt;   &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$server_package_ensure&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;

  &lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'puppetdb'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'puppetdb::master::config'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;This example profile is much more fleshed out than the previous &lt;code class=&quot;highlighter-rouge&quot;&gt;site::profiles::base&lt;/code&gt; profile to show the format I'm currently using in my production profiles.&lt;/p&gt;

&lt;p&gt;In the &lt;code class=&quot;highlighter-rouge&quot;&gt;common.yaml&lt;/code&gt; Hiera file, add this:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span class=&quot;c1&quot;&gt;# Puppet&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;site::puppet::puppet_package_ensure&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;latest'&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;site::puppet::server_package_ensure&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;latest'&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;site::puppet::settings::main&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;server&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;puppet'&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;parser&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;future'&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;ordering&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;manifest'&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;pluginsync&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;logdir&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;/var/log/puppet'&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;vardir&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;/var/lib/puppet'&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;ssldir&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;/var/lib/puppet/ssl'&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;rundir&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;/var/run/puppet'&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;site::puppet::settings::agent&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;certname&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;%{::fqdn}&quot;&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;show_diff&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;splay&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;false&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;configtimeout&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;360&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;usecacheonfailure&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;report&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;environment&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;%{::environment}&quot;&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;site::puppet::settings::server_default&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;JAVA_ARGS&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;-Xms1g&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;-Xmx1g&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;-XX:MaxPermSize=256m'&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;site::puppet::settings::master&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;ca&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;true&lt;/span&gt;
  &lt;span class=&quot;na&quot;&gt;ssldir&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;/var/lib/puppet/ssl'&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;puppetdb::master::config::restart_puppet&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;false&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;You can see how each Hiera item matches the corresponding section in the profile except for &lt;code class=&quot;highlighter-rouge&quot;&gt;puppetdb::master::config::restart_puppet&lt;/code&gt;. That parameter is set through Hiera's Automatic Parameter Lookup. Declaring it in Hiera is the same as if you did:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;&lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'puppetdb::master::config'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;py&quot;&gt;restart_puppet&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;kc&quot;&gt;false&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Now create the &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/puppet/modules/site/manifests/profiles/puppet/agent.pp&lt;/code&gt; profile:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;&lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;site::profiles::puppet::agent&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;

  &lt;span class=&quot;c&quot;&gt;# Hiera
&lt;/span&gt;  &lt;span class=&quot;nv&quot;&gt;$main_settings&lt;/span&gt;         &lt;span class=&quot;err&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;hiera&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'site::puppet::settings::main'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
  &lt;span class=&quot;nv&quot;&gt;$agent_settings&lt;/span&gt;        &lt;span class=&quot;err&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;hiera&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'site::puppet::settings::agent'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
  &lt;span class=&quot;nv&quot;&gt;$puppet_package_ensure&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;hiera&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'site::puppet::puppet_package_ensure'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;

  &lt;span class=&quot;c&quot;&gt;# Resources
&lt;/span&gt;  &lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'::puppet'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;main_settings&lt;/span&gt;         &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$main_settings&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;agent_settings&lt;/span&gt;        &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$agent_settings&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
    &lt;span class=&quot;py&quot;&gt;puppet_package_ensure&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;$puppet_package_ensure&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
  &lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;

&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Now build a role for the Puppet server:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;&lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;site::roles::puppet_server&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;contain&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;site::profiles::base&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;contain&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;site::profiles::puppet::server&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;
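&lt;p&gt;With the role defined, a node is classified with a single &lt;code class=&quot;highlighter-rouge&quot;&gt;include&lt;/code&gt;. As a sketch, assuming a hypothetical node name of &lt;code class=&quot;highlighter-rouge&quot;&gt;puppet.example.com&lt;/code&gt; in &lt;code class=&quot;highlighter-rouge&quot;&gt;site.pp&lt;/code&gt;:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;# site.pp: map the node to its single role
node 'puppet.example.com' {
  include site::roles::puppet_server
}&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Keeping the node definition down to one role is what makes the roles-and-profiles pattern easy to reason about.&lt;/p&gt;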

&lt;p&gt;With all of this in place, run Puppet:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;puppet apply &lt;span class=&quot;nt&quot;&gt;--verbose&lt;/span&gt; /etc/puppet/manifests/site.pp&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;When everything has finished, you should now be able to switch to using &lt;code class=&quot;highlighter-rouge&quot;&gt;puppet agent&lt;/code&gt; instead of &lt;code class=&quot;highlighter-rouge&quot;&gt;puppet apply&lt;/code&gt;:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;puppet agent &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;--noop&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h4 id=&quot;my-opinion-on-automatic-parameter-lookups&quot;&gt;My Opinion on Automatic Parameter Lookups&lt;/h4&gt;

&lt;p&gt;I think they're a great idea, but ultimately they're too &quot;magical&quot; and unintuitive. There's no easy way to tell whether they're being used just by reading the Puppet manifests; you have to read both the manifests and the Hiera data and correlate the two sources.&lt;/p&gt;

&lt;p&gt;If there are any automatic lookups in my Hiera data, it's because I got lazy. It happens.&lt;/p&gt;
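&lt;p&gt;To keep lookups visible, the same value can instead be looked up explicitly in a profile and passed in as a parameter. A minimal sketch of that style, using a hypothetical profile class and Hiera key:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;class site::profiles::puppetdb {

  # Hiera: the lookup is now visible in the manifest itself
  $restart_puppet = hiera('site::puppetdb::restart_puppet')

  # Resources
  class { 'puppetdb::master::config':
    restart_puppet =&amp;gt; $restart_puppet,
  }

}&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;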

&lt;h2 id=&quot;committing-everything&quot;&gt;Committing Everything&lt;/h2&gt;

&lt;p&gt;A lot of work has been done here. To see all of the changes that were made, do the following:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nb&quot;&gt;cd&lt;/span&gt; /etc/puppet/modules/site
git status&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;All of this should be committed into &lt;code class=&quot;highlighter-rouge&quot;&gt;git&lt;/code&gt;:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;git add &lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
git commit &lt;span class=&quot;nt&quot;&gt;-m&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;Created base profile, puppet server profile and role, configured hiera.&quot;&lt;/span&gt;
git push &lt;span class=&quot;nt&quot;&gt;-u&lt;/span&gt; origin master&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;This tutorial was an updated version of my 2013 &lt;a href=&quot;http://terrarum.net/blog/puppet-infrastructure.html&quot;&gt;Puppet Infrastructure&lt;/a&gt; tutorial. It described how to install and configure Puppet Server, PuppetDB, and Hiera as well as how to lay the foundation of a maintainable Puppet environment.&lt;/p&gt;

&lt;p&gt;In addition, I've included notes of my experiences learned over the past year with Puppet.&lt;/p&gt;

&lt;h2 id=&quot;resources&quot;&gt;Resources&lt;/h2&gt;

&lt;p&gt;As mentioned throughout this article:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;http://garylarizza.com/&quot;&gt;Gary Larizza's blog&lt;/a&gt;. Read &lt;em&gt;everything&lt;/em&gt; here. Twice.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://www.craigdunn.org/2012/05/239/&quot;&gt;Designing Puppet - Roles and Profiles&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://blog.keepingyouhonest.net/?p=443&quot;&gt;Puppet Roles and Profiles with a Simple Module Structure (part 2)&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://www.slideshare.net/PuppetLabs/roles-talk&quot;&gt;Roles and Profiles slidedeck&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.puppetlabs.com/puppet/latest/reference/lang_containment.html&quot;&gt;Puppet Containment&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.puppetlabs.com/hiera/1/puppet.html#automatic-parameter-lookup&quot;&gt;Puppet Automatic Parameters Lookup&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
				<pubDate>Thu, 01 Jan 2015 00:00:00 -0700</pubDate>
				<link>http://terrarum.net/blog/puppet-infrastructure-2015.html</link>
				<guid isPermaLink="true">http://terrarum.net/blog/puppet-infrastructure-2015.html</guid>
			</item>
		
			<item>
				<title>Masterless Puppet with Vagrant</title>
				<description>&lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;/h2&gt;

&lt;p&gt;In a previous article, I described how to create a &lt;a href=&quot;http://terrarum.net/blog/masterless-puppet-with-capistrano.html&quot;&gt;Masterless Puppet workflow with Capistrano&lt;/a&gt;. In this article, I'll show how you can use Vagrant for Masterless Puppet.&lt;/p&gt;

&lt;p&gt;Since Capistrano works over SSH, it can be used to control any type of server, from bare metal to LXC containers. However, the workflow can be a bit complex. Vagrant provides a simpler workflow, but you're limited to the server providers compatible with Vagrant. Each approach has its pros and cons.&lt;/p&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#prerequisites-and-dependencies&quot; id=&quot;markdown-toc-prerequisites-and-dependencies&quot;&gt;Prerequisites and Dependencies&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#vagrants-puppet-provisioners&quot; id=&quot;markdown-toc-vagrants-puppet-provisioners&quot;&gt;Vagrant's Puppet Provisioners&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#configuring-vagrant&quot; id=&quot;markdown-toc-configuring-vagrant&quot;&gt;Configuring Vagrant&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#vagrantfile&quot; id=&quot;markdown-toc-vagrantfile&quot;&gt;Vagrantfile&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#hiera&quot; id=&quot;markdown-toc-hiera&quot;&gt;Hiera&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#hierayaml&quot; id=&quot;markdown-toc-hierayaml&quot;&gt;hiera.yaml&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#commonyaml&quot; id=&quot;markdown-toc-commonyaml&quot;&gt;common.yaml&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#modules&quot; id=&quot;markdown-toc-modules&quot;&gt;Modules&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#defaultpp&quot; id=&quot;markdown-toc-defaultpp&quot;&gt;default.pp&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#the-final-directory-structure&quot; id=&quot;markdown-toc-the-final-directory-structure&quot;&gt;The Final Directory Structure&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#vagrant-up&quot; id=&quot;markdown-toc-vagrant-up&quot;&gt;Vagrant Up&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;prerequisites-and-dependencies&quot;&gt;Prerequisites and Dependencies&lt;/h2&gt;

&lt;p&gt;For this article, I'm using Vagrant 1.5.2 and &lt;a href=&quot;https://github.com/cloudbau/vagrant-openstack-plugin&quot;&gt;vagrant-openstack-plugin&lt;/a&gt; to enable Vagrant to launch virtual machines inside OpenStack. The virtual machine that Vagrant launches will be running Ubuntu 14.04.&lt;/p&gt;

&lt;p&gt;Despite these specific choices, the methods described in this article should be applicable to any Vagrant-managed server, as long as Vagrant's Puppet provisioner is compatible with it.&lt;/p&gt;

&lt;h2 id=&quot;vagrants-puppet-provisioners&quot;&gt;Vagrant's Puppet Provisioners&lt;/h2&gt;

&lt;p&gt;Out of the box, Vagrant supports provisioning servers with both &lt;a href=&quot;http://docs.vagrantup.com/v2/provisioning/puppet_apply.html&quot;&gt;Puppet Apply&lt;/a&gt; and &lt;a href=&quot;http://docs.vagrantup.com/v2/provisioning/puppet_agent.html&quot;&gt;Puppet Agent&lt;/a&gt;. Since this is a Masterless Puppet workflow, I'm going to be using Puppet Apply.&lt;/p&gt;

&lt;p&gt;Vagrant's documentation does an excellent job of describing how to use the Puppet Apply provisioner. The only gotcha I found is that Puppet &lt;em&gt;must&lt;/em&gt; already be installed on the Vagrant-based virtual machine before the Puppet Apply provisioner can run. Maybe I just missed that bit in the documentation.&lt;/p&gt;

&lt;h2 id=&quot;configuring-vagrant&quot;&gt;Configuring Vagrant&lt;/h2&gt;

&lt;p&gt;For this article, I'll create a simple Vagrant environment that launches a virtual machine inside OpenStack, installs Puppet on it via the Shell provisioner, and then installs Apache via the Puppet Apply provisioner, using Hiera and the Puppetlabs Apache module.&lt;/p&gt;

&lt;h3 id=&quot;vagrantfile&quot;&gt;Vagrantfile&lt;/h3&gt;

&lt;p&gt;Below is the completed &lt;code class=&quot;highlighter-rouge&quot;&gt;Vagrantfile&lt;/code&gt;:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-ruby&quot; data-lang=&quot;ruby&quot;&gt;&lt;span class=&quot;nb&quot;&gt;require&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'vagrant-openstack-plugin'&lt;/span&gt;

&lt;span class=&quot;no&quot;&gt;Vagrant&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;configure&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;2&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;|&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;config&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;|&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# &quot;dummy&quot; box because we're using Glance&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;config&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;vm&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;box&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;dummy&quot;&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# SSH&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;config&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;ssh&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;private_key_path&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;/path/to/id_rsa&quot;&lt;/span&gt;

  &lt;span class=&quot;c1&quot;&gt;# Basic OpenStack options&lt;/span&gt;
  &lt;span class=&quot;c1&quot;&gt;# Note that an openrc file needs sourced before using&lt;/span&gt;
  &lt;span class=&quot;n&quot;&gt;config&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;vm&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;provider&lt;/span&gt; &lt;span class=&quot;ss&quot;&gt;:openstack&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;|&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;os&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;|&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;os&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;username&lt;/span&gt;        &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;ENV&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'OS_USERNAME'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;os&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;api_key&lt;/span&gt;         &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;ENV&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'OS_PASSWORD'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;os&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;tenant&lt;/span&gt;          &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;no&quot;&gt;ENV&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'OS_TENANT_NAME'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;os&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;flavor&lt;/span&gt;          &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;sr&quot;&gt;/m1.tiny/&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;os&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;image&lt;/span&gt;           &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'cc4a5014-1a99-4d65-811a-8c0184b15dd7'&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;os&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;endpoint&lt;/span&gt;        &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;#{&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;ENV&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'OS_AUTH_URL'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;&lt;span class=&quot;si&quot;&gt;}&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;/tokens&quot;&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;os&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;keypair_name&lt;/span&gt;    &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;home&quot;&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;os&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;ssh_username&lt;/span&gt;    &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;ubuntu&quot;&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;os&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;security_groups&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&quot;default&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;end&lt;/span&gt;

  &lt;span class=&quot;n&quot;&gt;config&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;vm&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;define&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;p.example.com&quot;&lt;/span&gt;

  &lt;span class=&quot;n&quot;&gt;config&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;vm&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;provision&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;shell&quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;ss&quot;&gt;:inline&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;lt;-&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;SHELL&lt;/span&gt;&lt;span class=&quot;sh&quot;&gt;
    apt-get update
    apt-get install -y puppet
&lt;/span&gt;&lt;span class=&quot;no&quot;&gt;  SHELL&lt;/span&gt;

  &lt;span class=&quot;n&quot;&gt;config&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;vm&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;provision&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;puppet&quot;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;do&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;|&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;puppet&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;|&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;puppet&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;hiera_config_path&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;hiera.yaml&quot;&lt;/span&gt;
    &lt;span class=&quot;n&quot;&gt;puppet&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nf&quot;&gt;module_path&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;modules&quot;&lt;/span&gt;
  &lt;span class=&quot;k&quot;&gt;end&lt;/span&gt;

&lt;span class=&quot;k&quot;&gt;end&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h3 id=&quot;hiera&quot;&gt;Hiera&lt;/h3&gt;

&lt;p&gt;The &lt;code class=&quot;highlighter-rouge&quot;&gt;Vagrantfile&lt;/code&gt; defines a Hiera configuration called &lt;code class=&quot;highlighter-rouge&quot;&gt;hiera.yaml&lt;/code&gt; located in the same directory as the &lt;code class=&quot;highlighter-rouge&quot;&gt;Vagrantfile&lt;/code&gt;.&lt;/p&gt;

&lt;h4 id=&quot;hierayaml&quot;&gt;hiera.yaml&lt;/h4&gt;

&lt;p&gt;Here's the contents of &lt;code class=&quot;highlighter-rouge&quot;&gt;hiera.yaml&lt;/code&gt;:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;:backends&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;yaml&lt;/span&gt;

&lt;span class=&quot;s&quot;&gt;:hierarchy&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;pi&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;common&quot;&lt;/span&gt;

&lt;span class=&quot;s&quot;&gt;:yaml&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;s&quot;&gt;:datadir&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;/vagrant/hiera'&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;This tells Hiera to look for a file called &lt;code class=&quot;highlighter-rouge&quot;&gt;common.yaml&lt;/code&gt; in the &lt;code class=&quot;highlighter-rouge&quot;&gt;/vagrant/hiera&lt;/code&gt; directory on the newly provisioned virtual machine. On the Vagrant side, everything lives in the same directory as the &lt;code class=&quot;highlighter-rouge&quot;&gt;Vagrantfile&lt;/code&gt;.&lt;/p&gt;

&lt;h4 id=&quot;commonyaml&quot;&gt;common.yaml&lt;/h4&gt;

&lt;p&gt;The contents of &lt;code class=&quot;highlighter-rouge&quot;&gt;common.yaml&lt;/code&gt; will simply be:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-yaml&quot; data-lang=&quot;yaml&quot;&gt;&lt;span class=&quot;nn&quot;&gt;---&lt;/span&gt;
&lt;span class=&quot;s&quot;&gt;apache::serveradmin&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;joe@example.com'&lt;/span&gt;
&lt;span class=&quot;na&quot;&gt;my_message&lt;/span&gt;&lt;span class=&quot;pi&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;Hello,&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt; &lt;/span&gt;&lt;span class=&quot;s&quot;&gt;World!'&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;The first line sets the &lt;code class=&quot;highlighter-rouge&quot;&gt;serveradmin&lt;/code&gt; setting for Apache and the second line sets an arbitrary message for later use.&lt;/p&gt;

&lt;h3 id=&quot;modules&quot;&gt;Modules&lt;/h3&gt;

&lt;p&gt;The Vagrant Puppet Apply provisioner supports shipping modules alongside the &lt;code class=&quot;highlighter-rouge&quot;&gt;Vagrantfile&lt;/code&gt;. In the &lt;code class=&quot;highlighter-rouge&quot;&gt;Vagrantfile&lt;/code&gt;, the module path is configured as &lt;code class=&quot;highlighter-rouge&quot;&gt;./modules&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To install and configure Apache via Puppet, I'm going to use the &lt;code class=&quot;highlighter-rouge&quot;&gt;puppetlabs/apache&lt;/code&gt; module. This module requires the &lt;code class=&quot;highlighter-rouge&quot;&gt;puppetlabs/stdlib&lt;/code&gt; and &lt;code class=&quot;highlighter-rouge&quot;&gt;puppetlabs/concat&lt;/code&gt; modules, so in total, three modules will be used:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;mkdir modules
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;cd &lt;/span&gt;modules
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;git clone https://github.com/puppetlabs/puppetlabs-apache apache
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;git clone https://github.com/puppetlabs/puppetlabs-stdlib stdlib
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;git clone https://github.com/puppetlabs/puppetlabs-concat concat&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h3 id=&quot;defaultpp&quot;&gt;default.pp&lt;/h3&gt;

&lt;p&gt;The final component is an actual Puppet manifest. By default, Vagrant looks for &lt;code class=&quot;highlighter-rouge&quot;&gt;./manifests/default.pp&lt;/code&gt; which is exactly what I'll use:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;&lt;span class=&quot;k&quot;&gt;include&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;apache&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;notify&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;'my_message'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt;
  &lt;span class=&quot;py&quot;&gt;message&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;hiera&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;'my_message'&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;The first line installs and configures Apache using the default settings from the &lt;code class=&quot;highlighter-rouge&quot;&gt;puppetlabs/apache&lt;/code&gt; module &lt;em&gt;except&lt;/em&gt; for the &lt;code class=&quot;highlighter-rouge&quot;&gt;serveradmin&lt;/code&gt; setting, which was defined in Hiera as &lt;code class=&quot;highlighter-rouge&quot;&gt;joe@example.com&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The second line prints a message stored in Hiera just as another way to show that data is successfully being pulled from Hiera.&lt;/p&gt;
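&lt;p&gt;The &lt;code class=&quot;highlighter-rouge&quot;&gt;serveradmin&lt;/code&gt; value reaches the module through Hiera's Automatic Parameter Lookup. Declaring it explicitly in the manifest would be equivalent to:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-puppet&quot; data-lang=&quot;puppet&quot;&gt;class { 'apache':
  serveradmin =&amp;gt; 'joe@example.com',
}&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;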

&lt;h3 id=&quot;the-final-directory-structure&quot;&gt;The Final Directory Structure&lt;/h3&gt;

&lt;p&gt;With everything in place, the directory structure should look like this:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;tree &lt;span class=&quot;nt&quot;&gt;-L&lt;/span&gt; 2
&lt;span class=&quot;nb&quot;&gt;.&lt;/span&gt;
├── Vagrantfile
├── hiera
│   └── common.yaml
├── hiera.yaml
├── manifests
│   └── default.pp
└── modules
    ├── apache
    ├── concat
    └── stdlib&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h2 id=&quot;vagrant-up&quot;&gt;Vagrant Up&lt;/h2&gt;

&lt;p&gt;Now simply run &lt;code class=&quot;highlighter-rouge&quot;&gt;vagrant up&lt;/code&gt; and watch Vagrant go to work!&lt;/p&gt;

&lt;p&gt;If you're using a provider other than VirtualBox, like the OpenStack provider used in this article, don't forget to specify it on the command line:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;vagrant up &lt;span class=&quot;nt&quot;&gt;--provider&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;openstack&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;This article showed how Vagrant can provide a Masterless Puppet workflow. Aside from the non-default OpenStack provider, every feature shown in this article is native to Vagrant, which makes using Puppet within Vagrant easy and accessible. The downside is that you can only provision servers for which a Vagrant provider exists. If this is a blocker for you, consider using something like Capistrano and &lt;a href=&quot;http://terrarum.net/blog/masterless-puppet-with-capistrano.html&quot;&gt;building your own Masterless Puppet workflow&lt;/a&gt;.&lt;/p&gt;
</description>
				<pubDate>Fri, 25 Apr 2014 00:00:00 -0600</pubDate>
				<link>http://terrarum.net/blog/masterless-puppet-with-vagrant.html</link>
				<guid isPermaLink="true">http://terrarum.net/blog/masterless-puppet-with-vagrant.html</guid>
			</item>
		
			<item>
				<title>Building an LXC Server - Ubuntu 14.04 Edition</title>
				<description>&lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;/h2&gt;

&lt;p&gt;This article is a basic step-by-step HOWTO to create a server capable of hosting &lt;a href=&quot;http://linuxcontainers.org/&quot;&gt;LXC&lt;/a&gt;-based containers.&lt;/p&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#prerequisites-and-dependencies&quot; id=&quot;markdown-toc-prerequisites-and-dependencies&quot;&gt;Prerequisites and Dependencies&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#apt-update&quot; id=&quot;markdown-toc-apt-update&quot;&gt;apt Update&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#open-vswitch&quot; id=&quot;markdown-toc-open-vswitch&quot;&gt;Open vSwitch&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#zfs&quot; id=&quot;markdown-toc-zfs&quot;&gt;ZFS&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#install-lxc&quot; id=&quot;markdown-toc-install-lxc&quot;&gt;Install LXC&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#configure-lxc&quot; id=&quot;markdown-toc-configure-lxc&quot;&gt;Configure LXC&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#back-to-zfs&quot; id=&quot;markdown-toc-back-to-zfs&quot;&gt;Back to ZFS&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#using-lxc&quot; id=&quot;markdown-toc-using-lxc&quot;&gt;Using LXC&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#creating-a-container&quot; id=&quot;markdown-toc-creating-a-container&quot;&gt;Creating a Container&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#testing-zfs-deduplication&quot; id=&quot;markdown-toc-testing-zfs-deduplication&quot;&gt;Testing ZFS Deduplication&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#port-forwarding&quot; id=&quot;markdown-toc-port-forwarding&quot;&gt;Port Forwarding&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#lxc-nat&quot; id=&quot;markdown-toc-lxc-nat&quot;&gt;lxc-nat&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;prerequisites-and-dependencies&quot;&gt;Prerequisites and Dependencies&lt;/h2&gt;

&lt;p&gt;This server uses Ubuntu 14.04. As 14.04 has only just been released, some of these steps might change in the future.&lt;/p&gt;

&lt;h3 id=&quot;apt-update&quot;&gt;apt Update&lt;/h3&gt;

&lt;p&gt;First, make sure all of the base packages are up to date:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;apt-get update
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;apt-get dist-upgrade&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h3 id=&quot;open-vswitch&quot;&gt;Open vSwitch&lt;/h3&gt;

&lt;p&gt;The previous version of this article advocated Open vSwitch. I have since stopped using OVS as I've been able to configure Linux Bridge with the exact same features by using newer kernels.&lt;/p&gt;
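
&lt;p&gt;As a rough sketch, a Linux Bridge that places containers directly on the host's network can be defined in &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/network/interfaces&lt;/code&gt; with the &lt;code class=&quot;highlighter-rouge&quot;&gt;bridge-utils&lt;/code&gt; package installed (the interface names here are examples):&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;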

&lt;h3 id=&quot;zfs&quot;&gt;ZFS&lt;/h3&gt;

&lt;p&gt;The newer LXC builds support ZFS as a backing store, which means you can take advantage of deduplication, compression, and snapshots. To install ZFS on Ubuntu 14.04, do:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;apt-add-repository ppa:zfs-native/daily
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;apt-get update
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;apt-get install ubuntu-zfs&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h2 id=&quot;install-lxc&quot;&gt;Install LXC&lt;/h2&gt;

&lt;p&gt;Ubuntu 14.04 provides LXC 1.0.3, currently the latest version. I'm not sure whether Ubuntu 14.04 will continue providing up-to-date versions of LXC, given that it is an LTS release. If it falls behind, it might be beneficial to switch to the &lt;code class=&quot;highlighter-rouge&quot;&gt;ubuntu-lxc/daily&lt;/code&gt; ppa.&lt;/p&gt;

&lt;p&gt;To install LXC, just do:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;apt-get install lxc&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h2 id=&quot;configure-lxc&quot;&gt;Configure LXC&lt;/h2&gt;

&lt;h3 id=&quot;back-to-zfs&quot;&gt;Back to ZFS&lt;/h3&gt;

&lt;p&gt;By default, LXC looks for a zpool named &lt;code class=&quot;highlighter-rouge&quot;&gt;lxc&lt;/code&gt;. This setup uses a pool named &lt;code class=&quot;highlighter-rouge&quot;&gt;tank&lt;/code&gt; instead, which LXC will be explicitly pointed at shortly. First, create the pool:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;zpool create &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; tank /dev/vdc&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Make sure deduplication and compression are turned on:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;zfs &lt;span class=&quot;nb&quot;&gt;set &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;dedup&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;on tank
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;zfs &lt;span class=&quot;nb&quot;&gt;set &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;compression&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;on tank&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;LXC can use ZFS's native snapshot features. To make sure you can see snapshots when running &lt;code class=&quot;highlighter-rouge&quot;&gt;zfs list&lt;/code&gt;, do the following:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;zpool &lt;span class=&quot;nb&quot;&gt;set &lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;listsnapshots&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;on tank&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;To configure LXC to use ZFS as the backing store and set the default LXC path, add the following to &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/lxc/lxc.conf&lt;/code&gt;:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;lxc.lxcpath &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; /tank/lxc/containers
lxc.bdev.zfs.root &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; tank/lxc/containers&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Ensure &lt;code class=&quot;highlighter-rouge&quot;&gt;/tank/lxc/containers&lt;/code&gt;, or whichever path you choose, exists:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;zfs create tank/lxc
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;zfs create tank/lxc/containers&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h2 id=&quot;using-lxc&quot;&gt;Using LXC&lt;/h2&gt;

&lt;h3 id=&quot;creating-a-container&quot;&gt;Creating a Container&lt;/h3&gt;

&lt;p&gt;Create the first container by doing:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;lxc-create &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; ubuntu &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; test1 &lt;span class=&quot;nt&quot;&gt;-B&lt;/span&gt; zfs &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-S&lt;/span&gt; /root/.ssh/id_rsa.pub&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;When the command has finished, you'll see that LXC has created a new ZFS filesystem for the container:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;zfs list
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;df &lt;span class=&quot;nt&quot;&gt;-h&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;h3 id=&quot;testing-zfs-deduplication&quot;&gt;Testing ZFS Deduplication&lt;/h3&gt;

&lt;p&gt;You can view the pool's deduplication ratio by running:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  99.5G   186M  99.3G     0%  1.01x  ONLINE  -&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;With that number in mind, create a second container:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;lxc-create &lt;span class=&quot;nt&quot;&gt;-t&lt;/span&gt; ubuntu &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; test2 &lt;span class=&quot;nt&quot;&gt;-B&lt;/span&gt; zfs &lt;span class=&quot;nt&quot;&gt;--&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-S&lt;/span&gt; /root/.ssh/id_rsa.pub&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;When the command has finished, review the ZFS stat:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;&lt;span class=&quot;nb&quot;&gt;sudo &lt;/span&gt;zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  99.5G   187M  99.3G     0%  2.02x  ONLINE  -&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;The dedup ratio doubled while the allocated space grew by only 1M. Effectively, no new disk space was consumed when the second container was created!&lt;/p&gt;

&lt;h2 id=&quot;port-forwarding&quot;&gt;Port Forwarding&lt;/h2&gt;

&lt;p&gt;By default, LXC uses the &lt;code class=&quot;highlighter-rouge&quot;&gt;veth&lt;/code&gt; networking mode for containers. This is the most robust networking mode. Other modes exist and I &lt;em&gt;highly&lt;/em&gt; recommend &lt;a href=&quot;http://containerops.org/2013/11/19/lxc-networking/&quot;&gt;this&lt;/a&gt; article for a detailed look at them.&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;veth&lt;/code&gt; mode can be thought of as a form of NAT: the LXC server acts as a NAT gateway for all of the containers running on it. If you want the containers to be accessible from the public internet, you will need to do some port forwarding.&lt;/p&gt;
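
&lt;p&gt;Note that forwarding traffic to the containers requires IP forwarding to be enabled in the kernel. The Ubuntu LXC packages normally take care of this, but it can be checked and enabled by hand:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ sysctl net.ipv4.ip_forward
$ sudo sysctl -w net.ipv4.ip_forward=1&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;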

&lt;h3 id=&quot;lxc-nat&quot;&gt;lxc-nat&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt;: Zan Loy has made a much better version of the &lt;code class=&quot;highlighter-rouge&quot;&gt;lxc-nat&lt;/code&gt; script mentioned below. The improved version can be found &lt;a href=&quot;https://gist.github.com/zanloy/a5648941383d519bb9c4&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update 2&lt;/strong&gt;: Daniël created a Python version of &lt;code class=&quot;highlighter-rouge&quot;&gt;lxc-nat&lt;/code&gt; which can be found &lt;a href=&quot;https://github.com/daniel5gh/lxc-nat-py&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I have put together a small script called &lt;a href=&quot;https://github.com/jtopjian/lxc-nat&quot;&gt;lxc-nat&lt;/a&gt; that will configure port forwarding based on entries made in &lt;code class=&quot;highlighter-rouge&quot;&gt;/etc/lxc/lxc-nat.conf&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For example, if you have Apache running in a container called &lt;code class=&quot;highlighter-rouge&quot;&gt;www&lt;/code&gt;, create the following entry:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;10.0.0.1:80 -&amp;gt; www:80&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;

&lt;p&gt;Or if you want to access &lt;code class=&quot;highlighter-rouge&quot;&gt;www&lt;/code&gt; via SSH:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;10.0.0.1:2201 -&amp;gt; www:22&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;
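
&lt;p&gt;Independently of the script, the same forwarding can also be written by hand as an iptables DNAT rule. An equivalent of the SSH example above would look something like this (10.0.3.10 is an assumed address for the &lt;code class=&quot;highlighter-rouge&quot;&gt;www&lt;/code&gt; container):&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ sudo iptables -t nat -A PREROUTING -d 10.0.0.1 -p tcp --dport 2201 \
    -j DNAT --to-destination 10.0.3.10:22&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;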

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;This article showed the steps used to configure a server to host LXC-based containers on a ZFS storage backend.&lt;/p&gt;
</description>
				<pubDate>Sat, 19 Apr 2014 00:00:00 -0600</pubDate>
				<link>http://terrarum.net/blog/building-an-lxc-server-1404.html</link>
				<guid isPermaLink="true">http://terrarum.net/blog/building-an-lxc-server-1404.html</guid>
			</item>
		
			<item>
				<title>Masterless Puppet with Capistrano</title>
				<description>&lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;/h2&gt;

&lt;p&gt;This article will describe how to create a masterless Puppet workflow with Capistrano. A masterless Puppet workflow uses Puppet, but without a Puppet Master server – hence master&lt;em&gt;less&lt;/em&gt;. The benefit of this workflow is a reduction in the number of services required to run a Puppet environment: no Puppet Master, no PuppetDB, and no certificate management.&lt;/p&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#capistrano-installation-and-configuration&quot; id=&quot;markdown-toc-capistrano-installation-and-configuration&quot;&gt;Capistrano Installation and Configuration&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#capistrano-modules&quot; id=&quot;markdown-toc-capistrano-modules&quot;&gt;Capistrano Modules&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#utils&quot; id=&quot;markdown-toc-utils&quot;&gt;Utils&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#hiera&quot; id=&quot;markdown-toc-hiera&quot;&gt;Hiera&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#puppet&quot; id=&quot;markdown-toc-puppet&quot;&gt;Puppet&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;capistrano-installation-and-configuration&quot;&gt;Capistrano Installation and Configuration&lt;/h2&gt;

&lt;p&gt;The first step is to install and configure &lt;a href=&quot;http://capistranorb.com/&quot;&gt;Capistrano&lt;/a&gt;. The documentation found on Capistrano's homepage is excellent.&lt;/p&gt;
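
&lt;p&gt;In short, Capistrano is distributed as a Ruby gem. Assuming Ruby and RubyGems are already present and you are using Capistrano 3, a typical setup looks like:&lt;/p&gt;

&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot; data-lang=&quot;bash&quot;&gt;$ gem install capistrano
$ cd /path/to/your/project
$ cap install&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;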

&lt;h2 id=&quot;capistrano-modules&quot;&gt;Capistrano Modules&lt;/h2&gt;

&lt;p&gt;I have written a series of &quot;modules&quot; for Capistrano that can be found &lt;a href=&quot;https://github.com/jtopjian/capistrano-modules&quot;&gt;here&lt;/a&gt;. &quot;Modules&quot; is quoted because there is no official pluggable set of libraries for Capistrano called modules. These are just tasks and libraries that I've tried to make easily distributable.&lt;/p&gt;

&lt;p&gt;The README files found in each of the modules should be clear enough for you to successfully install, configure, and use each module. However, if something is not clear, please let me know.&lt;/p&gt;

&lt;h3 id=&quot;utils&quot;&gt;Utils&lt;/h3&gt;

&lt;p&gt;The first module required is the &lt;a href=&quot;https://github.com/jtopjian/capistrano-modules/tree/master/utils&quot;&gt;utils&lt;/a&gt; module. This module contains some helper functions for rendering templates and uploading files.&lt;/p&gt;

&lt;h3 id=&quot;hiera&quot;&gt;Hiera&lt;/h3&gt;

&lt;p&gt;The next module to configure is the &lt;a href=&quot;https://github.com/jtopjian/capistrano-modules/tree/master/hiera&quot;&gt;hiera&lt;/a&gt; module.&lt;/p&gt;

&lt;p&gt;By the time you have finished configuring this module, you should have a set of servers listed in a stage's data source file similar to &lt;a href=&quot;https://github.com/jtopjian/capistrano-modules/tree/master/hiera#capistranostagingyaml&quot;&gt;this example&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;puppet&quot;&gt;Puppet&lt;/h3&gt;

&lt;p&gt;The final module to configure is the &lt;a href=&quot;https://github.com/jtopjian/capistrano-modules/tree/master/puppet&quot;&gt;puppet&lt;/a&gt; module. The README file for this module has a specific section for a &lt;a href=&quot;https://github.com/jtopjian/capistrano-modules/tree/master/puppet#masterless-puppet&quot;&gt;masterless Puppet workflow&lt;/a&gt;. Again, if the documentation is not clear, please let me know.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;This article covered the components required for a masterless Puppet workflow with Capistrano. You should now be able to easily use Puppet to configure remote servers without the need for a Puppet Master.&lt;/p&gt;
</description>
				<pubDate>Mon, 24 Feb 2014 00:00:00 -0700</pubDate>
				<link>http://terrarum.net/blog/masterless-puppet-with-capistrano.html</link>
				<guid isPermaLink="true">http://terrarum.net/blog/masterless-puppet-with-capistrano.html</guid>
			</item>
		
	</channel>
</rss>
