Docker Symfony Development Box with Data Containers

Data containers are containers whose only purpose is to define a path or folder that gets mounted into other containers. The data they expose can be read-only or writeable. This means that when you need swap or temporary storage for your projects you can use these containers, since volumes mounted from your host may not be writeable.

So we start by defining the cache and logs containers as follows:

# cache_container
FROM busybox
RUN mkdir -p /var/cache/repay/dev
VOLUME /var/cache/repay
ENTRYPOINT chmod -R 777 /var/cache/repay; /usr/bin/tail -f /dev/null

And:

# logs_container
FROM busybox
RUN mkdir -p /var/log/repay/
VOLUME /var/log/repay
ENTRYPOINT chmod -R 777 /var/log/repay; /usr/bin/tail -f /dev/null

Here is how you piece everything together. You build those containers; notice that they have entrypoints, so when each container starts that command is invoked and sets the right permissions on those folders.

After the Symfony box is built you can run the whole thing and attach/mount the data containers onto your main container, the one where the code is shared from the host:

#!/usr/bin/env bash
 
# troubleshoot
# docker exec -i -t repay_development_webserver bash
 
# kill all previous containers
docker rm -f $(docker ps -a -q)
 
docker build --rm -t "cache_container:v1" docker/cache_container
docker build --rm -t "logs_container:v1" docker/logs_container
 
docker run -d --name repay_cache_container cache_container:v1
docker run -d --name repay_logs_container logs_container:v1
 
# build symfony project box
docker build --rm -t repay .
 
# run symfony container
docker run \
    -v $PWD:/srv \
    -e DB_NAME=repay \
    -e INIT=bin/reload \
    --volumes-from repay_cache_container:rw \
    --volumes-from repay_logs_container:rw \
    --name repay_development_webserver \
    -itP repay
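Once the container is up, a quick check from the host (a sketch, reusing the container name from the script above) confirms that the data volumes are mounted and writeable:

# the cache and log volumes should be visible inside the Symfony container
docker exec -it repay_development_webserver ls -ld /var/cache/repay /var/log/repay

# and the dev cache dir should accept writes
docker exec -it repay_development_webserver touch /var/cache/repay/dev/probe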
 
This works because we added a couple of methods to the Symfony kernel:
 
    /**
     * {@inheritdoc}
     */
    public function getCacheDir()
    {
        return '/var/cache/repay/'.$this->environment;
    }
 
    /**
     * {@inheritdoc}
     */
    public function getLogDir()
    {
        return '/var/log/repay';
    }

I grabbed the main Dockerfile from an adaptation of ubermuda's Symfony Docker repo:

FROM debian:jessie
 
ENV DEBIAN_FRONTEND noninteractive
 
RUN apt-get update -y
RUN apt-get install -y nginx php5-fpm php5-mysqlnd php5-cli mysql-server supervisor
 
RUN sed -e 's/;daemonize = yes/daemonize = no/' -i /etc/php5/fpm/php-fpm.conf
RUN sed -e 's/;listen\.owner/listen.owner/' -i /etc/php5/fpm/pool.d/www.conf
RUN sed -e 's/;listen\.group/listen.group/' -i /etc/php5/fpm/pool.d/www.conf
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
 
ADD vhost.conf /etc/nginx/sites-available/default
ADD supervisor.conf /etc/supervisor/conf.d/supervisor.conf
ADD init.sh /init.sh
RUN chmod 777 /init.sh
 
EXPOSE 80
 
VOLUME ["/srv"]
WORKDIR /srv
 
CMD ["/usr/bin/supervisord"]

Now you can enjoy!

My SSH KeyChain And Phansible

I built a box with phansible and was getting this error:

~ vagrant provision                                                                                 Luiss-MacBook-Pro-3 [7:39:07]
==> default: Running provisioner: ansible...
 
PLAY [all] ********************************************************************
 
GATHERING FACTS ***************************************************************
fatal: [192.168.56.108] => SSH encountered an unknown error during the connection. We recommend you re-run the command using -vvvv, which will enable SSH debugging output to help diagnose the issue
 
TASK: [init | Update apt] *****************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
 
 
PLAY RECAP ********************************************************************
           to retry, use: --limit @/Users/cordoval/playbook.retry
 
192.168.56.108             : ok=0    changed=0    unreachable=1    failed=0
 
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

The reason was that my host was not connecting to the vagrant box because of several things configured in my ~/.ssh/config. So what I did was:

# added entry to ~/.ssh/config
Host 192.168.56.108
  HostName 192.168.56.108
  User vagrant
  IdentitiesOnly yes
  IdentityFile ~/.ssh/vagrant
# added in /etc/hosts
192.168.56.108  project.local

You can check the identities loaded in your agent with ssh-add -L.

Try to test with ssh -v 192.168.56.108.
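If the key is not loaded in your agent yet, adding it and testing the connection looks roughly like this (a sketch, assuming the key path from the config entry above):

# load the vagrant key into the SSH agent
ssh-add ~/.ssh/vagrant

# list the loaded identities; the vagrant key should show up
ssh-add -L

# verbose connection test against the box
ssh -v vagrant@192.168.56.108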

Then you will be able to successfully run the rest of your playbooks:

~ vagrant provision                                                                                 Luiss-MacBook-Pro-3 [7:40:45]
==> default: Running provisioner: ansible...
 
PLAY [all] ********************************************************************
 
GATHERING FACTS ***************************************************************
ok: [192.168.56.108]
 
TASK: [init | Update apt] *****************************************************
ok: [192.168.56.108]
 
TASK: [init | Install System Packages] ****************************************
changed: [192.168.56.108] => (item=curl,wget,python-software-properties)
 
TASK: [init | Add ppa Repository] *********************************************
changed: [192.168.56.108]

Encouragements!

Plug Dev Flow with Gush and Bldr into Vagrant

I recently added Gush and Bldr to a vagrant box generated by puphpet.com. I also plugged in librarian-puppet instead of tracking the whole module set in git. This is cleaner than keeping track of 2k+ files that are really just git repositories and are already versioned elsewhere.

So here are the sparse instructions. For Gush, this goes inside vagrant/puphpet/puppet/nodes/gush.pp; yes, you just add a gush.pp puppet file to your nodes:

  $gush_github   = 'https://github.com/gushphp/gush.git'
  $gush_location = '/usr/share/gush'
 
  exec { 'delete-gush-path-if-not-git-repo':
    command => "rm -rf ${gush_location}",
    onlyif  => "test ! -d ${gush_location}/.git",
    path    => [ '/bin/', '/sbin/', '/usr/bin/', '/usr/sbin/' ],
  } ->
  vcsrepo { $gush_location:
    ensure   => present,
    provider => git,
    source   => $gush_github,
    revision => 'master',
  } ->
  composer::exec { 'gush':
    cmd     => 'install',
    cwd     => $gush_location,
    require => Vcsrepo[$gush_location],
  } ->
  file { 'symlink gush':
    ensure => link,
    path   => '/usr/bin/gush',
    target => "${gush_location}/bin/gush",
  }

This will nicely plug gush into your vagrant and you just have to reprovision with:

vagrant provision

I also provisioned the popular dev script, but via the Vagrantfile. If you think it is better to provision this custom file via a .pp file, make a good case for it in the comments; I think this is fine for now. Here it is:

# Vagrantfile

   if !data['vagrant']['host'].nil?
     config.vagrant.host = data['vagrant']['host'].gsub(':', '').intern
   end
+
+  config.vm.provision :shell, :path => 'puphpet/shell/dev.sh'
 end

And place the dev file inside ${VAGRANT_CORE_FOLDER}/files/dot/dev:

#!/usr/bin/env bash

bldr=$(which bldr)
 
if [ -x "$bldr" ] ; then
    $bldr install
    $bldr run $1
else
    if [ ! -f ./bldr.phar ]; then
        curl -sS http://bldr.io/installer | php
    fi
 
    ./bldr.phar install
    ./bldr.phar run $1
fi

And also the shell script under puphpet/shell/dev.sh:

#!/bin/bash

VAGRANT_CORE_FOLDER=$(cat '/.puphpet-stuff/vagrant-core-folder.txt')

sudo chmod +x "${VAGRANT_CORE_FOLDER}/files/dot/dev"
sudo cp "${VAGRANT_CORE_FOLDER}/files/dot/dev" '/usr/local/bin/dev'
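After reprovisioning, the wrapper is available inside the box; a quick usage sketch (the task name here is hypothetical and depends on your bldr.yml):

# inside the vagrant box: install dependencies and run a bldr task through the wrapper
dev build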

The last trick is librarian-puppet, which you can install as a Ruby gem. This tool is like Composer, but for Puppet modules. Everything under vagrant/puphpet/puppet/modules should be added to your project's .gitignore, and you just run librarian-puppet install, much like in Composer; see the sketch after the ignore list below.

/vagrant/puphpet/puppet/.librarian
/vagrant/puphpet/puppet/.tmp
/vagrant/puphpet/puppet/modules
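Installing librarian-puppet and fetching the modules is then something like this (a sketch; adjust the working directory to wherever your Puppetfile lives):

# install the librarian-puppet gem
gem install librarian-puppet

# resolve and install the modules listed in the Puppetfile
cd vagrant/puphpet/puppet
librarian-puppet install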

In order to pin the exact SHA1 that each of your working puppet modules was at, just replace the Puppetfile with:

forge "https://forgeapi.puppetlabs.com"
 
mod 'puppetlabs/apache',
    :git => 'https://github.com/puphpet/puppetlabs-apache.git',
    :ref => 'f12483b'
mod 'puppetlabs/apt',
    :git => 'https://github.com/puppetlabs/puppetlabs-apt.git',
    :ref => '1.4.2'
mod 'puphpet/beanstalkd',
    :git => 'https://github.com/puphpet/puppet-beanstalkd.git',
    :ref => '5a530ff'
mod 'tPl0ch/composer',
    :git => 'https://github.com/tPl0ch/puppet-composer.git',
    :ref => '1.2.1'
mod 'puppetlabs/concat',
    :git => 'https://github.com/puppetlabs/puppetlabs-concat.git',
    :ref => '1.1.0'
mod 'elasticsearch/elasticsearch',
    :git => 'https://github.com/puphpet/puppet-elasticsearch.git',
    :ref => '0.2.2'
mod 'garethr/erlang',
    :git => 'https://github.com/garethr/garethr-erlang.git',
    :ref => '91d8ec73c3'
mod 'puppetlabs/firewall',
    :git => 'https://github.com/puppetlabs/puppetlabs-firewall.git',
    :ref => '1.1.1'
mod 'actionjack/mailcatcher',
    :git => 'https://github.com/puphpet/puppet-mailcatcher.git',
    :ref => 'dcc8c3d357'
mod 'puppetlabs/mongodb',
    :git => 'https://github.com/puppetlabs/puppetlabs-mongodb.git',
    :ref => '0.8.0'
mod 'puppetlabs/mysql',
    :git => 'https://github.com/puppetlabs/puppetlabs-mysql.git',
    :ref => '2.3.1'
mod 'jfryman/nginx',
    :git => 'https://github.com/jfryman/puppet-nginx.git',
    :ref => 'v0.0.9'
mod 'puppetlabs/ntp',
    :git => 'https://github.com/puppetlabs/puppetlabs-ntp.git',
    :ref => '3.0.4'
mod 'puphpet/php',
    :git => 'https://github.com/puphpet/puppet-php.git',
    :ref => 'a1dad7828d'
mod 'puppetlabs/postgresql',
    :git => 'https://github.com/puppetlabs/puppetlabs-postgresql.git',
    :ref => '3.3.3'
mod 'puphpet/puphpet',
    :git => 'https://github.com/puphpet/puppet-puphpet.git',
    :ref => '9253681'
mod 'example42/puppi',
    :git => 'https://github.com/example42/puppi.git',
    :ref => 'v2.1.9'
mod 'daenney/pyenv',
    :git => 'https://github.com/puphpet/puppet-pyenv.git',
    :ref => '062ae72'
mod 'puppetlabs/rabbitmq',
    :git => 'https://github.com/puppetlabs/puppetlabs-rabbitmq.git',
    :ref => '5ce33f4968'
mod 'puphpet/redis',
    :git => 'https://github.com/puphpet/puppet-redis.git',
    :ref => 'd9b3b23b0c'
mod 'maestrodev/rvm'
mod 'puppetlabs/sqlite',
    :git => 'https://github.com/puppetlabs/puppetlabs-sqlite.git',
    :ref => 'v0.0.1'
mod 'petems/swap_file',
    :git => 'https://github.com/petems/puppet-swap_file.git',
    :ref => '39582afda5'
mod 'nanliu/staging',
    :git => 'https://github.com/nanliu/puppet-staging.git',
    :ref => '0.4.0'
mod 'ajcrowe/supervisord',
    :git => 'https://github.com/puphpet/puppet-supervisord.git',
    :ref => '17643f1'
mod 'puppetlabs/stdlib'
mod 'puppetlabs/vcsrepo',
    :git => 'https://github.com/puppetlabs/puppetlabs-vcsrepo.git',
    :ref => '0.2.0'
mod 'example42/yum',
    :git => 'https://github.com/example42/puppet-yum.git',
    :ref => 'v2.1.10'

That is it! Encouragements in all good! And give retweets, please!

GushPHP.org Site now Bears Documentation: Contribute Linking Other Commands!

Tonight I spent some time on the https://github.com/gushphp/gush-site repository, which is the site source code served at http://gushphp.org.

The site is built on Sculpin, and for a long time I was struggling with getting the command documentation onto that page in a more automated way.

So I connected the documentation link on the top navigation bar to a documentation page.


The documentation page right now bears a couple of linked commands. When you click on one of them you will land on that command's page. These pages still need some contributions to beautify them, but the essence is there:

http://gushphp.org/commands/branch_changelog/


The generation takes advantage of a script located in the gushphp/gush repo, generate_docu. This script writes .md files formatted so they are ready to be placed under the source/commands folder of the gushphp/gush-site repository.

Soon we will connect all the commands. Right now we need help making the pages prettier and completing the links; will you help us, please?

Thanks!

Doctrine Migrations with Schema API without Symfony: Symfony CMF SeoBundle Sylius Example

Usually when we have a Symfony project we rely on the Doctrine migrations bundle. The problem with this approach is that it is often too magic and leads to misunderstandings about how to do proper migrations. There was a rewrite of the doctrine/migrations package by @beberlei some time ago. Even though it is still a pending PR to this day, I believe other frameworks and communities can benefit from using the component standalone with Doctrine DBAL or otherwise.

Here I will show how I used this component to do my migrations using the Schema API from Doctrine. Let’s start!

First let’s require the puppy:

    // inside your composer.json, add this and then run: composer update doctrine/migrations
    "doctrine/migrations": "dev-Rewrite@dev"

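From the command line this amounts to something like the following (a sketch; the Rewrite branch has to exist upstream for the constraint to resolve):

# require the rewritten branch directly
composer require "doctrine/migrations:dev-Rewrite@dev"

# or, after editing composer.json by hand, update just that package
composer update doctrine/migrations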
Let's dive into the command that will run things for us. Yes, it is a Symfony command, and if you don't want it to be container aware you can easily inject the migrations service instead. However, for those locked up in the den, here is a coupled version:

<?php
 
namespace Vendor\DieBundle\Command;
 
use Doctrine\Migrations\Migrations;
use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\InputArgument;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;
 
class MigrationCommand extends ContainerAwareCommand
{
    const COMMAND_SUCCESS = 0;
    const COMMAND_FAILURE = 1;
 
    /** @var  Migrations */
    protected $migrations;
 
    protected function configure()
    {
        $this
            ->setName('doctrine:migrations')
            ->setDescription('migrates database')
            ->addArgument('step', InputArgument::REQUIRED, 'step')
            ->setHelp(<<<EOT
This command has a mandatory argument that needs to be either: repair, boot, migrate or info.
 
EOT
            )
        ;
    }
 
    protected function execute(InputInterface $input, OutputInterface $output)
    {
        $this->migrations = $this->getContainer()->get('vendor_migrations');
 
        switch ($step = $input->getArgument('step')) {
            case 'repair':
                $this->migrations->repair();
                break;
            case 'boot':
                try {
                    $this->migrations->initializeMetadata();
                } catch (\Exception $e) {
                    $output->writeln('Already booted/initialized.');
                }
                break;
            case 'migrate':
                $this->migrations->migrate();
                break;
            case 'info':
            default:
                $status = $this->migrations->getInfo();
                $output->writeln($status->isInitialized() ? 'is initialized' : 'is not initialized');
                $output->writeln('Executed migrations:');
                foreach ($status->getExecutedMigrations() as $migration) {
                    /** @var \Doctrine\Migrations\MigrationInfo $migration */
                    $output->writeln(sprintf('  - %s', $migration->getVersion()));
                }
                $output->writeln('Outstanding migrations:');
                foreach ($status->getOutstandingMigrations() as $migration) {
                    $output->writeln(sprintf('  - %s', $migration->getVersion()));
                }
                break;
        }
 
        $output->writeln(sprintf('Executed step %s', $step));
 
        return self::COMMAND_SUCCESS;
    }
}

This command uses a couple of services coming from the package that we have not seen yet. It lets us run the migrate step; the boot step just creates the table used for migration tracking, and the info step prints a nice list from that table showing executed migration versions as well as outstanding ones.
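In day-to-day use that translates to calls like these, using the command name and steps defined above:

# create the migrations tracking table (first run only)
app/console doctrine:migrations boot

# run all outstanding migrations
app/console doctrine:migrations migrate

# show executed and outstanding migrations
app/console doctrine:migrations info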

Let's take a deep look into the extension that sets up these services for us (of course this is code added by me to accomplish the setup):

    // inside your extension class
class VendorDieExtension extends Extension
{
    /**
     * {@inheritDoc}
     */
    public function load(array $configs, ContainerBuilder $container)
    {
        // ...
        $this->defineMigrationsServiceAndConfiguration($container);
    }
 
    private function defineMigrationsServiceAndConfiguration(ContainerBuilder $container)
    {
        $migrationsArray = array(
            'db' => array(
                'driver' => $container->getParameter('database_driver'),
                'host' => $container->getParameter('database_host'),
                'port' => $container->getParameter('database_port'),
                'dbname' => $container->getParameter('database_name'),
                'user' => $container->getParameter('database_user'),
                'password' => $container->getParameter('database_password')
            ),
            'migrations' => array(
                'script_directory' => $container->getParameter('kernel.root_dir').'/Migrations/',
                'allow_init_on_migrate' => true,
                'validate_on_migrate' => true,
                'allow_out_of_order_migrations' => false
            )
        );
 
        $container->setParameter('vendor_migrations_array', $migrationsArray);
 
        $factory = new Definition('Doctrine\Migrations\DBAL\Factory');
 
        $container->setDefinition('vendor_migrations_factory', $factory);
 
        $migrations = new Definition('Doctrine\Migrations\Migrations');
        $migrations
            ->setFactoryService('vendor_migrations_factory')
            ->setFactoryMethod('createFromArray')
            ->addArgument($migrationsArray)
        ;
 
        $container->setDefinition('vendor_migrations', $migrations);
    }
}

With this we have created the vendor_migrations service that we call inside our command. Everything is set, and we have specified where our migrations live in the project. So let's take a look at a sample migration:

<?php
 
use Doctrine\DBAL\Schema\Column;
use Doctrine\DBAL\Schema\ForeignKeyConstraint;
use Doctrine\DBAL\Schema\Table;
use Doctrine\DBAL\Schema\TableDiff;
use Doctrine\DBAL\Types\Type;
use Doctrine\Migrations\DBAL\DBALMigration;
use Doctrine\DBAL\Connection;
use Doctrine\DBAL\Schema\Index;
 
class V0002_from_prod_23092014_to_seo implements DBALMigration
{
    public function migrate(Connection $connection)
    {
        $schemaManager = $connection->getSchemaManager();
 
        $seoTable = new Table('cmf_seo_metadata');
        $seoTable->addColumn('id', 'integer', array('length' => 10000, 'notnull' => true, 'autoincrement' => true));
        $seoTable->addIndex(array('id'), 'id');
        $seoTable->addColumn('title', 'string', array('length' => 255, 'notnull' => false));
        $seoTable->addColumn('metaDescription', 'string', array('length' => 255, 'notnull' => false));
        $seoTable->addColumn('metaKeywords', 'string', array('length' => 255, 'notnull' => false));
        $seoTable->addColumn('originalUrl', 'string', array('length' => 255, 'notnull' => false));
        $seoTable->addColumn('extraNames', 'string', array('length' => 1000, 'notnull' => false, 'comment' => '(DC2Type:array)'));
        $seoTable->addColumn('extraProperties', 'string', array('length' => 1000, 'notnull' => false, 'comment' => '(DC2Type:array)'));
        $seoTable->addColumn('extraHttp', 'string', array('length' => 1000, 'notnull' => false, 'comment' => '(DC2Type:array)'));
        $seoTable->setPrimaryKey(array('id'));
 
        $schemaManager->createTable($seoTable);
 
        $tableDiff = new TableDiff('vendor_media_items', array(new Column('seo_metadata', Type::getType('integer'), array('notnull' => false))));
        $schemaManager->alterTable($tableDiff);
 
        $keyConstraint = new ForeignKeyConstraint(array('seo_metadata'), 'cmf_seo_metadata', array('id'), 'FK_ADE411B7AEB39536');
        $schemaManager->createConstraint($keyConstraint, 'vendor_media_items');
 
        $schemaManager->createIndex(new Index('UNIQ_ADE411B7AEB39536', array('seo_metadata'), true), 'vendor_media_items');
 
        $tableDiff = new TableDiff('sylius_product', array(new Column('seo_metadata', Type::getType('integer'), array('notnull' => false))));
        $schemaManager->alterTable($tableDiff);
 
        $keyConstraint = new ForeignKeyConstraint(array('seo_metadata'), 'cmf_seo_metadata', array('id'), 'FK_677B9B74AEB39536');
        $schemaManager->createConstraint($keyConstraint, 'sylius_product');
 
        $schemaManager->createIndex(new Index('UNIQ_677B9B74AEB39536', array('seo_metadata'), true), 'sylius_product');
 
        $tableDiff = new TableDiff('sylius_taxon', array(
                new Column('seo_metadata', Type::getType('integer'), array('notnull' => false)),
                new Column('meta_title', Type::getType('string'), array('notnull' => false)),
                new Column('meta_description', Type::getType('string'), array('notnull' => false)),
            )
        );
        $schemaManager->alterTable($tableDiff);
 
        $keyConstraint = new ForeignKeyConstraint(array('seo_metadata'), 'cmf_seo_metadata', array('id'), 'FK_CFD811CAAEB39536');
        $schemaManager->createConstraint($keyConstraint, 'sylius_taxon');
 
        $schemaManager->createIndex(new Index('UNIQ_CFD811CAAEB39536', array('seo_metadata'), true), 'sylius_taxon');
 
        // forgotten change from before
        $tableDiff = new TableDiff('sylius_variant', array(), array(), array(new Column('revision', Type::getType('integer'))));
        $schemaManager->alterTable($tableDiff);
    }
}

The migration is from a project using Sylius and the Symfony CMF SEO Bundle. That is why the names look familiar :). It is a real working example!

Don't be scared: the code above is the translation of the SQL output from app/console doctrine:schema:update --dump-sql, but expressed with the Schema API from Doctrine. I actually find it makes more sense to write the migration this way as we develop or add model persistence to a project, and just verify it against the dumper. I hardly found good documentation on the API, but after some troubleshooting I finally feel a lot more comfortable with it and with the new version of the doctrine/migrations library.

The migration runs like a charm, and having tested it in development we can be fairly confident it will work well in production. There is a suggestion to use tools external to PHP, but that is a no-go for projects like WordPress and others that want their plugins to handle these migrations themselves. So this is a good option!

Encouragements in all good, and please retweet to support me writing.

Thanks!