Plug Dev Flow with Gush and Bldr into Vagrant

I recently added Gush and Bldr to a Vagrant box generated by PuPHPet. I also plugged in librarian-puppet instead of tracking the whole module set in git. This is much cleaner than keeping track of 2k+ files that belong to git repositories and are already versioned elsewhere.

So here are the sparse instructions. For Gush, this goes inside vagrant/puphpet/puppet/nodes/gush.pp; yes, you just add a gush.pp Puppet file to your nodes:

  $gush_github   = ''
  $gush_location = '/usr/share/gush'
  exec { 'delete-gush-path-if-not-git-repo':
    command => "rm -rf ${gush_location}",
    onlyif  => "test ! -d ${gush_location}/.git",
    path    => [ '/bin/', '/sbin/', '/usr/bin/', '/usr/sbin/' ],
  } ->
  vcsrepo { $gush_location:
    ensure   => present,
    provider => git,
    source   => $gush_github,
    revision => 'master',
  } ->
  composer::exec { 'gush':
    cmd     => 'install',
    cwd     => $gush_location,
    require => Vcsrepo[$gush_location],
  } ->
  file { 'symlink gush':
    ensure => link,
    path   => '/usr/bin/gush',
    target => "${gush_location}/bin/gush",
  }

This will nicely plug Gush into your Vagrant box, and you just have to reprovision with:

vagrant provision

I also provisioned the popular dev script, but via the Vagrantfile. If you think it is best to provision this custom file via a .pp file instead, leave a good justification in the comments. I think it is OK for now, and here it is:

# Vagrantfile

 if !data['vagrant']['host'].nil?
   # ... existing host handling ...
 end
+config.vm.provision :shell, :path => 'puphpet/shell/'

And place the dev file inside ${VAGRANT_CORE_FOLDER}/files/dot/dev:

#!/usr/bin/env bash

bldr=$(which bldr)
if [ -x "$bldr" ] ; then
    $bldr install
    $bldr run "$1"
else
    if [ ! -f ./bldr.phar ]; then
        curl -sS | php
    fi
    ./bldr.phar install
    ./bldr.phar run "$1"
fi

And also the shell script under puphpet/shell/:


#!/usr/bin/env bash

VAGRANT_CORE_FOLDER=$(cat '/.puphpet-stuff/vagrant-core-folder.txt')

sudo chmod +x "${VAGRANT_CORE_FOLDER}/files/dot/dev"
sudo cp "${VAGRANT_CORE_FOLDER}/files/dot/dev" '/usr/local/bin/dev'

The last trick is librarian-puppet, which you can install as a Ruby gem. This tool is like Composer, but for Puppet modules. So everything under vagrant/puphpet/puppet/modules should be added to your project's .gitignore, and you just run librarian-puppet install, like in Composer.
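As a sketch, the whole workflow boils down to this; the gem name is the standard one, the path follows this post, and the commands are guarded so the snippet is safe to run anywhere:

```shell
# Install librarian-puppet (a Ruby gem), then fetch the modules declared
# in the Puppetfile. Guarded: prints a hint instead of failing when the
# tool is not installed or we are not at the project root.
if command -v librarian-puppet >/dev/null 2>&1; then
    (cd vagrant/puphpet/puppet && librarian-puppet install) \
        || echo "run this from the project root"
else
    echo "librarian-puppet not installed; run: gem install librarian-puppet"
fi
```

After the first run, librarian-puppet writes a Puppetfile.lock pinning exactly what was fetched.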


In order to pin each module to the sha1 your working Puppet setup was using, replace the Puppetfile with something like:

forge ""
mod 'puppetlabs/apache',
    :git => '',
    :ref => 'f12483b'
mod 'puppetlabs/apt',
    :git => '',
    :ref => '1.4.2'
mod 'puphpet/beanstalkd',
    :git => '',
    :ref => '5a530ff'
mod 'tPl0ch/composer',
    :git => '',
    :ref => '1.2.1'
mod 'puppetlabs/concat',
    :git => '',
    :ref => '1.1.0'
mod 'elasticsearch/elasticsearch',
    :git => '',
    :ref => '0.2.2'
mod 'garethr/erlang',
    :git => '',
    :ref => '91d8ec73c3'
mod 'puppetlabs/firewall',
    :git => '',
    :ref => '1.1.1'
mod 'actionjack/mailcatcher',
    :git => '',
    :ref => 'dcc8c3d357'
mod 'puppetlabs/mongodb',
    :git => '',
    :ref => '0.8.0'
mod 'puppetlabs/mysql',
    :git => '',
    :ref => '2.3.1'
mod 'jfryman/nginx',
    :git => '',
    :ref => 'v0.0.9'
mod 'puppetlabs/ntp',
    :git => '',
    :ref => '3.0.4'
mod 'puphpet/php',
    :git => '',
    :ref => 'a1dad7828d'
mod 'puppetlabs/postgresql',
    :git => '',
    :ref => '3.3.3'
mod 'puphpet/puphpet',
    :git => '',
    :ref => '9253681'
mod 'example42/puppi',
    :git => '',
    :ref => 'v2.1.9'
mod 'daenney/pyenv',
    :git => '',
    :ref => '062ae72'
mod 'puppetlabs/rabbitmq',
    :git => '',
    :ref => '5ce33f4968'
mod 'puphpet/redis',
    :git => '',
    :ref => 'd9b3b23b0c'
mod 'maestrodev/rvm'
mod 'puppetlabs/sqlite',
    :git => '',
    :ref => 'v0.0.1'
mod 'petems/swap_file',
    :git => '',
    :ref => '39582afda5'
mod 'nanliu/staging',
    :git => '',
    :ref => '0.4.0'
mod 'ajcrowe/supervisord',
    :git => '',
    :ref => '17643f1'
mod 'puppetlabs/stdlib'
mod 'puppetlabs/vcsrepo',
    :git => '',
    :ref => '0.2.0'
mod 'example42/yum',
    :git => '',
    :ref => 'v2.1.10'

That is it! Encouragements in all good, and give retweets please!

Site Now Bears Documentation: Contribute by Linking Other Commands!

Tonight I spent some time on the gushphp/gush-site repository, which is the site source code that is served online.

The site is built on Sculpin, and for a long time I struggled with getting the command documentation onto that page in a more automated way.

So I connected the documentation link on the top navigation bar to a documentation page.


The documentation page right now bears a couple of linked commands. When you click on one of them, you land on that command's page. Right now these pages need some contributions to beautify them, but the essence is there:


The generation takes advantage of a script located in the gushphp/gush repo, generate_docu. This script writes .md files formatted so that they are ready to be placed under the source/commands folder of the gushphp/gush-site repository.

Soon we will connect all the commands. We need help right now just making it look prettier and completing the links. Will you help us, please?


Doctrine Migrations with Schema API without Symfony: Symfony CMF SeoBundle Sylius Example

Usually when we have a project in Symfony, we rely on the Doctrine migrations bundle. The problem with this approach is that it is often too magical, and there are misunderstandings about how to do proper migrations. Some time ago @beberlei rewrote the doctrine/migrations package. Even though it is a pending PR to this day, I believe other frameworks and communities can benefit from using the component standalone, with Doctrine DBAL or otherwise.

Here I will show how I used this component to do my migrations using the Schema API from Doctrine. Let’s start!

First let’s require the puppy:

    // inside your composer.json, add this and run: composer update doctrine/migrations
    "doctrine/migrations": "dev-Rewrite@dev"

Let's dive into the command that will run things for us. Yes, it is a Symfony command, and if you don't want it to become container-aware you can easily inject the migrations service instead. However, for those locked up in the den, here is a coupled version:

namespace Vendor\DieBundle\Command;

use Doctrine\Migrations\Migrations;
use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\InputArgument;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class MigrationCommand extends ContainerAwareCommand
{
    const COMMAND_SUCCESS = 0;
    const COMMAND_FAILURE = 1;

    /** @var Migrations */
    protected $migrations;

    protected function configure()
    {
        $this
            ->setDescription('migrates database')
            ->addArgument('step', InputArgument::REQUIRED, 'step');
    }
This command has a mandatory argument that needs to be either: repair, boot, migrate or info.
    protected function execute(InputInterface $input, OutputInterface $output)
    {
        $this->migrations = $this->getContainer()->get('vendor_migrations');

        switch ($step = $input->getArgument('step')) {
            case 'repair':
            case 'boot':
                try {
                    // initialization call elided in the original
                } catch (\Exception $e) {
                    $output->writeln('Already booted/initialized.');
                }
                break;
            case 'migrate':
                // migrate call elided in the original
                break;
            case 'info':
                $status = $this->migrations->getInfo();
                $output->writeln($status->isInitialized() ? 'is initialized' : 'is not initialized');
                $output->writeln('Executed migrations:');
                foreach ($status->getExecutedMigrations() as $migration) {
                    /** @var \Doctrine\Migrations\MigrationInfo $migration */
                    $output->writeln(sprintf('  - %s', $migration->getVersion()));
                }
                $output->writeln('Outstanding migrations:');
                foreach ($status->getOutstandingMigrations() as $migration) {
                    $output->writeln(sprintf('  - %s', $migration->getVersion()));
                }
                break;
        }

        $output->writeln(sprintf('Executed step %s', $step));

        return self::COMMAND_SUCCESS;
    }
}

This command uses certain services coming from the package that we have not seen yet. It allows us to run the migrate command. The initialization step just creates a table for migrations tracking. The info step nicely outputs the list of migrations from this table, showing executed migration versions as well as outstanding ones.
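Usage would look something like the following. Note that the command name vendor:migrations is an assumption, since setName() is elided in the snippet above, and the guarded loop only echoes what it would run when no console is found:

```shell
# Hypothetical invocations of the command above; "vendor:migrations" is an
# assumed name. Dry-runs (echoes) when app/console is not present.
console=app/console
for step in boot migrate info; do
    if [ -x "$console" ]; then
        php "$console" vendor:migrations "$step"
    else
        echo "would run: php $console vendor:migrations $step"
    fi
done
```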

Let's take a deeper look at the extension that sets these services up for us (of course, this is code added by me to accomplish the setup):

    // inside your extension class
class VendorDieExtension extends Extension
{
    /**
     * {@inheritDoc}
     */
    public function load(array $configs, ContainerBuilder $container)
    {
        // ...
        $this->defineMigrationsServiceAndConfiguration($container);
    }

    private function defineMigrationsServiceAndConfiguration(ContainerBuilder $container)
    {
        $migrationsArray = array(
            'db' => array(
                'driver'   => $container->getParameter('database_driver'),
                'host'     => $container->getParameter('database_host'),
                'port'     => $container->getParameter('database_port'),
                'dbname'   => $container->getParameter('database_name'),
                'user'     => $container->getParameter('database_user'),
                'password' => $container->getParameter('database_password'),
            ),
            'migrations' => array(
                'script_directory'              => $container->getParameter('kernel.root_dir').'/Migrations/',
                'allow_init_on_migrate'         => true,
                'validate_on_migrate'           => true,
                'allow_out_of_order_migrations' => false,
            ),
        );

        $container->setParameter('vendor_migrations_array', $migrationsArray);

        $factory = new Definition('Doctrine\Migrations\DBAL\Factory');
        $container->setDefinition('vendor_migrations_factory', $factory);

        $migrations = new Definition('Doctrine\Migrations\Migrations');
        $container->setDefinition('vendor_migrations', $migrations);
    }
}

With this we have created the vendor_migrations service that we call inside our command. All is set, and we have specified where our migrations will live in the project. So let's take a look at a sample migration:

use Doctrine\DBAL\Connection;
use Doctrine\DBAL\Schema\Column;
use Doctrine\DBAL\Schema\ForeignKeyConstraint;
use Doctrine\DBAL\Schema\Index;
use Doctrine\DBAL\Schema\Table;
use Doctrine\DBAL\Schema\TableDiff;
use Doctrine\DBAL\Types\Type;
use Doctrine\Migrations\DBAL\DBALMigration;

class V0002_from_prod_23092014_to_seo implements DBALMigration
{
    public function migrate(Connection $connection)
    {
        $schemaManager = $connection->getSchemaManager();

        $seoTable = new Table('cmf_seo_metadata');
        $seoTable->addColumn('id', 'integer', array('length' => 10000, 'notnull' => true, 'autoincrement' => true));
        $seoTable->addIndex(array('id'), 'id');
        $seoTable->addColumn('title', 'string', array('length' => 255, 'notnull' => false));
        $seoTable->addColumn('metaDescription', 'string', array('length' => 255, 'notnull' => false));
        $seoTable->addColumn('metaKeywords', 'string', array('length' => 255, 'notnull' => false));
        $seoTable->addColumn('originalUrl', 'string', array('length' => 255, 'notnull' => false));
        $seoTable->addColumn('extraNames', 'string', array('length' => 1000, 'notnull' => false, 'comment' => '(DC2Type:array)'));
        $seoTable->addColumn('extraProperties', 'string', array('length' => 1000, 'notnull' => false, 'comment' => '(DC2Type:array)'));
        $seoTable->addColumn('extraHttp', 'string', array('length' => 1000, 'notnull' => false, 'comment' => '(DC2Type:array)'));
        $schemaManager->createTable($seoTable);

        $tableDiff = new TableDiff('vendor_media_items', array(new Column('seo_metadata', Type::getType('integer'), array('notnull' => false))));
        $schemaManager->alterTable($tableDiff);
        $keyConstraint = new ForeignKeyConstraint(array('seo_metadata'), 'cmf_seo_metadata', array('id'), 'FK_ADE411B7AEB39536');
        $schemaManager->createConstraint($keyConstraint, 'vendor_media_items');
        $schemaManager->createIndex(new Index('UNIQ_ADE411B7AEB39536', array('seo_metadata'), true), 'vendor_media_items');

        $tableDiff = new TableDiff('sylius_product', array(new Column('seo_metadata', Type::getType('integer'), array('notnull' => false))));
        $schemaManager->alterTable($tableDiff);
        $keyConstraint = new ForeignKeyConstraint(array('seo_metadata'), 'cmf_seo_metadata', array('id'), 'FK_677B9B74AEB39536');
        $schemaManager->createConstraint($keyConstraint, 'sylius_product');
        $schemaManager->createIndex(new Index('UNIQ_677B9B74AEB39536', array('seo_metadata'), true), 'sylius_product');

        $tableDiff = new TableDiff('sylius_taxon', array(
            new Column('seo_metadata', Type::getType('integer'), array('notnull' => false)),
            new Column('meta_title', Type::getType('string'), array('notnull' => false)),
            new Column('meta_description', Type::getType('string'), array('notnull' => false)),
        ));
        $schemaManager->alterTable($tableDiff);
        $keyConstraint = new ForeignKeyConstraint(array('seo_metadata'), 'cmf_seo_metadata', array('id'), 'FK_CFD811CAAEB39536');
        $schemaManager->createConstraint($keyConstraint, 'sylius_taxon');
        $schemaManager->createIndex(new Index('UNIQ_CFD811CAAEB39536', array('seo_metadata'), true), 'sylius_taxon');

        // forgotten change from before
        $tableDiff = new TableDiff('sylius_variant', array(), array(), array(new Column('revision', Type::getType('integer'))));
        $schemaManager->alterTable($tableDiff);
    }
}

The migration is from a project using Sylius and the Symfony CMF SEO Bundle. That is why the names look familiar :). It is a real working example!

Don't be scared: the above is the translation of the SQL output of app/console doctrine:schema:update --dump-sql, but digested with the Schema API from Doctrine. I actually find it makes more sense to write migrations like this as we develop or add model persistence in a project, verifying against the dumper. I hardly found good documentation on the API, but after some troubleshooting I now feel a lot more comfortable with it, and with the new version of the doctrine/migrations library.
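That sanity check is easy to script. This is a guarded sketch that assumes a standard Symfony 2.x layout with app/console:

```shell
# Dump the SQL Doctrine would generate, to compare against the
# hand-written Schema API migration. Prints a hint when run outside
# a Symfony project.
if [ -x app/console ]; then
    php app/console doctrine:schema:update --dump-sql
else
    echo "run this from a Symfony project root"
fi
```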

The migration runs like a charm, and having tested it in development we can be confident it will work well in production. There is a suggestion to use tools external to PHP, but that is a no-go, since projects like WordPress and others would want their plugins to perform these migrations themselves. So this is a good option!

Encouragements in all good, and please retweet to support me writing.


Testing with the StreamWrapper File System Libraries

I was working on Bldr, the nice task-runner tool that automates development, and some days ago I ran into a problem. Usually when you work with bldr-io, it has you create several files in the project root directory. We have bldr.json to specify the external dependencies.

For instance in my case I want to use gush-bldr-block so I have:

    {
        "require": {
            "bldr-io/gush-block": "dev-master@dev"
        },
        "config": {
            "vendor-dir": "app/build/vendor",
            "block-loader": "app/build/blocks.yml"
        }
    }

Before my change, pointing to block-loader did not work: Bldr was not finding the blocks it downloaded under the app/build/vendor folder. Feeling confused? Let me rewind. Bldr has Composer built in. This means you run bldr install to read bldr.json and install additional dependencies on top of the native composer.json project dependencies. Bldr uses your composer.json, plus the bldr.json, to create its autoload. The idea is that this autoload loads blocks, third-party packages that are used to build your project.

Why would you need to add more dependencies to your project? Because packages that build projects can be reused; common maintenance tasks and others can be packaged this way. If every such package were bundled with Bldr, including ones you don't like, the tool would be bloated unnecessarily. By splitting them out, Bldr allows customization of the same tool. Some other tools choose extensions, but then you have to plug their dependencies into your composer.json. Bldr's embedded Composer keeps these dependencies separate, yet ensures they are compatible with your project's other requirements.

    # bldr.yml
    imports:
        - { resource: app/build/profiles.yml }
        - { resource: app/build/tasks.yml }

    name: vendor/project
    description: some description

Bldr needs files like bldr.yml and a .bldr folder containing tasks.yml and profiles.yml, if you import these from bldr.yml. In addition it generates a bldr.lock because of the bldr.json. That is a lot of files. Since this is all configuration, I think it is better to store them under app/build, say on a project with an app folder that stores mostly configuration.

So instead of having all these files spread across the root of the project, we move them there, and it looks very clean.

You can keep creating your custom blocks and rerun bldr install.
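The resulting loop when you add blocks is short. This is a guarded sketch that assumes bldr is on your PATH:

```shell
# Regenerate the third-party blocks after editing bldr.json. Prints a
# hint instead of failing when bldr is not installed.
if command -v bldr >/dev/null 2>&1; then
    bldr install   # reads composer.json plus bldr.json, rewrites bldr.lock
else
    echo "bldr not installed; see the bldr-io project for setup"
fi
```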

So we saw the need for a PR; now let's look at how I wrote the PR and its test. The main change was needed because Bldr had hardcoded the path for finding the third-party blocks:

     public function getThirdPartyBlocks()
         /** @var Application $application */
         $application = $this->get('application');
-        $blockFile   = $application->getEmbeddedComposer()->getExternalRootDirectory().'/.bldr/blocks.yml';
+        $embeddedComposer = $application->getEmbeddedComposer();
+        $config = $embeddedComposer->getExternalComposerConfig();
+        $loadBlock = $config->has('block-loader') ? $config->get('block-loader') : '.bldr/blocks.yml';
+        $blockFile = $embeddedComposer->getExternalRootDirectory().DIRECTORY_SEPARATOR.$loadBlock;

Instead of hardcoding it to /.bldr/blocks.yml, we now ask the embedded Composer instance, which has already read bldr.json, for that information under the block-loader key. The path is in turn constructed from the root directory, also provided by the embedded Composer.

Because the ContainerBuilder class we are working in tests for the existence of the blocks.yml file under certain folders, we had to come up with a cleaner way to test for file existence without polluting the folders with fixtures.

         "phpunit/phpunit":           "~4.2.0",
+        "mikey179/vfsStream":        "~1.4.0",

Welcome to the vfsStream package! This little thing is very good for testing by using PHP's stream wrapper. Let's see the final test:

/*
 * This file is part of
 * (c) Aaron Scherer <>
 *
 * This source file is subject to the MIT license that is bundled
 * with this source code in the file LICENSE
 */

namespace Bldr\Test\DependencyInjection;

use Bldr\DependencyInjection\ContainerBuilder;
use org\bovigo\vfs\vfsStream;

/**
 * @author Luis Cordova <>
 */
class ContainerBuilderTest extends \PHPUnit_Framework_TestCase
{
    /** @var ContainerBuilder */
    protected $containerBuilder;
    protected $application;
    protected $input;
    protected $output;
    protected $root;

    public function setUp()
    {
        $this->application = $this->getMockBuilder('Bldr\Application') /* ... */;
        // ...
        $this->root = vfsStream::setup();
    }

    public function testGetThirdPartyBlocks()
    {
        $embeddedComposer = $this->getMockBuilder('Dflydev\EmbeddedComposer\Core\EmbeddedComposer') /* ... */;
        // ...
        $config = $this->getMockBuilder('Composer\Config') /* ... */;
        // ...
        $bldrFolder = vfsStream::newDirectory('build')->at($this->root);
        vfsStream::newFile('blocks.yml')->at($bldrFolder)
            ->withContent('[ \stdClass, \stdClass ]');

        $this->containerBuilder = new ContainerBuilder(/* ... */);

        $this->assertCount(2, $this->containerBuilder->getThirdPartyBlocks());
    }
}

I have tried to shorten the mocking part, but you can see clearly that to match a file_exists($file) call in the code we have to create, inside the vfs system, a root project directory and then the folders, using the vfsStream:: method API. It is pretty simple. One thing I struggled with is that for the root project folder you always have to provide the reference, i.e. vfsStream::url('root').

Now we see there is no real file created or anything. All is done via the little vfsStream library, which plays well with PHPUnit.

Hope it helps! Please retweet, and if you want to see the code, the link to the PR is at the top of this blog post. Thanks for reading!

Arrivederci Ciao Chau Symfony … Welcome Standalone Dumper & C Extension

You start with requiring the dependency on the component like this:

"patchwork/dumper": "~1.2@dev",

The component also does not get tagged very often, so you will need master to get the Twig function explained below. You can also install it together with ladybug, to compare :).

After adding the above:

"raulfraile/ladybug": "~1.0.8",
"patchwork/dumper": "~1.2@dev",

You can require the bundle from ladybug too if you want its Twig function. You can also get the dumper Twig function by plugging in the bundle that already comes with patchwork/dumper:

if ('dev' === $this->getEnvironment()) {
    // ...
    $bundles[] = new Symfony\Bundle\DebugBundle\DebugBundle();
    $bundles[] = new Symfony\Bundle\WebProfilerBundle\WebProfilerBundle();
    // ...
}

Of course, as usual, I do not recommend loading bundles; what I do instead is plug the Twig extension straight into the kernel when possible. Now it is time to focus on the component. After this DebugBundle is plugged in, you can invoke it from your view too.

The function dumps the variable var to the same view:

{{ dump(var) }}

Whereas the dump tag only presents a miniature preview on the Symfony WDT (Web Debug Toolbar) and waits for you to click there, which takes you to the sidebar view in white that is also expandable:

{% dump var %}

I fully prefer dumping directly in the view, since less time is wasted. Also, it works the same as in the CLI, so there is nothing new to learn. I tested it side by side with ladybug, and I think ladybug still shows some information that can be expanded further, and it is what I am more accustomed to. However, ladybug often breaks, whereas this component is very solid, integrates nicely with Symfony, and has a dark background, which I like very much.

Some examples borrowed from the PR in Symfony follow. Notice the beautiful dumping of the container:

Source: patchwork dumper component on github.

Now a side-by-side comparison of dump vs var_dump:

Source: patchwork dumper component on github and my desktop screenshot.

Ladybug's colors fall short a bit, but it is still a good option. However, the reach of the dumper component goes way beyond, since it comes with a C extension that can be compiled:

sudo make install
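For context, sudo make install is only the last step of the usual phpize build sequence for a PHP C extension. This is a guarded sketch; the ext/symfony_debug source location is an assumption:

```shell
# Typical phpize build for a PHP C extension; checks for the tools and
# the (assumed) source directory before doing anything.
if command -v phpize >/dev/null 2>&1 && [ -d ext/symfony_debug ]; then
    cd ext/symfony_debug && phpize && ./configure && make && sudo make install
else
    echo "need the PHP dev headers (phpize) and the extension source checkout"
fi
```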

However, honestly, until this is as easy to install as:

brew install php56-symfony_debug

I will not be able to use it, nor do I want to, since I would rather fall back to the PHP implementation.

The view of ladybug is expandable in a nice way:

Source: Github LadybugBundle.

Notice ladybug also has a bundle, though I would rather not use it, even if it sometimes comes in handy for a dump in the view. The thing is, I think many other packages would like to use this component totally independently from Symfony; however, depending on it pulls in several other dependencies right away:

"require": {
    "php": ">=5.3.3",
    "symfony/config": "~2.4",
    "symfony/dependency-injection": "~2.2",
    "symfony/event-dispatcher": "~2.3",
    "symfony/http-kernel": "~2.3"
},
"require-dev": {
    "twig/twig": "~1.12"
}

Really a bunch more, since the DI component and the others will pull in a load of things.

What I will use in my projects is the component itself, but I will try to untie as much as possible and wire things with, say, Aura DI, or at least try. The component itself should just be invokable via:

use Symfony\Component\VarDumper\VarDumper as Dumper;

And here is the implementation detail:

    public static function dump($var)
    {
        if (null === self::$handler) {
            $cloner = extension_loaded('symfony_debug') ? new ExtCloner() : new PhpCloner();
            $dumper = 'cli' === PHP_SAPI ? new CliDumper() : new HtmlDumper();
            self::$handler = function ($var) use ($cloner, $dumper) {
                $dumper->dump($cloner->cloneVar($var));
            };
        }

        return call_user_func(self::$handler, $var);
    }

    public static function setHandler($callable)
    {
        if (null !== $callable && !is_callable($callable, true)) {
            throw new \InvalidArgumentException('Invalid PHP callback.');
        }

        $prevHandler = self::$handler;
        self::$handler = $callable;

        return $prevHandler;
    }

Let the reader understand a bit. Basically it uses a cloner within the handler, which finally gets called to dump the variable $var. It switches between the HTML and CLI dumpers accordingly, and depending on whether the C extension is loaded it chooses the respective cloner (ExtCloner or PhpCloner). The high-level handler callable then uses the chosen implementations to do the final job.

That is it! So before 2.6, even now, you can take advantage of this nice component and plug it into, say, your Aura PHP project or another. It will rock, and you will not have to wait for the 2.6 release, nor for the big merge of the PR that smashes it into the whole bulk of symfony/symfony.

If someone plugs the C extension on brew please let me know!