drush uli gets “access denied” using vagrant

I noticed that “drush uli” would sometimes just result in “access denied” when Drupal was running inside a Vagrant machine.

The funny thing was, if I waited a little bit and tried the user login link again, it worked fine.

The problem is that the Vagrant machine’s clock may be a few seconds ahead of your host machine’s clock.

So the token that uli generates isn’t valid until your host catches up.
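
You can check for skew by comparing the two clocks – a quick sanity check, assuming a standard Vagrant setup:

date -u && vagrant ssh -c 'date -u'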

Solution: install “ntp” on both your host and the Vagrant machine so their clocks stay in sync.
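
On Debian/Ubuntu boxes (an assumption – adjust the package manager for your distro), that’s something like:

sudo apt-get install -y ntp                      # on the host
vagrant ssh -c 'sudo apt-get install -y ntp'     # inside the Vagrant machine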

dynamic drush alias files

Alias files are useful. If you have many sites on a remote server, instead of manually adding them to your alias file every time, write some code to automatically generate them.

[php title="~/.drush/dynamic.aliases.drushrc.php"]
$domains = explode("\n", trim(shell_exec("ssh user@myserver.example.com 'ls /some/path'")));
// Now $domains is a list of directories on your remote server.

// Set up an alias for each one.
foreach ($domains as $domain) {
  $aliases[$domain] = array(
    'root' => "/path/to/$domain/public_html",
    'remote-host' => 'myserver.example.com',
    // If necessary, set a URI other than the default.
    'uri' => $domain,
    'remote-user' => 'user_if_necessary',
  );
}
[/php]

Now whenever a site is configured on your target server, you will automatically have the alias!
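
For example, once a new site appears at /some/path/newsite.example.com on the server, you can immediately run (hypothetical domain):

drush @newsite.example.com status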

consider versioning your .drush directory

Our team uses Drush frequently throughout the development workflow – grabbing database dumps of sites, running drush make, rebuilding the registry, executing custom company-specific commands, and so on. In the past, everyone had to manually download or copy those commands into their own .drush directory.

Now we version the .drush directory, so when a new developer onboards, they can simply check out the .drush directory from version control.

This is incredibly useful: since everyone has the same Drush setup, you can build a very powerful devops toolkit shared across all team members!
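
A minimal onboarding step, assuming the toolkit lives in a git repository (hypothetical URL):

git clone git@git.example.com:team/drush-toolkit.git ~/.drush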

a faster alternative to sql-sync

Where I work, I probably reload environments about 50 times a day – testing bug fixes, running data migrations, reproducing errors, doing failure analysis, and so on.

Even if I can save 30 seconds with an automated database reload process, it will add up.

There’s been work on improving drush sql-sync, including https://drupal.org/project/drush_sql_sync_pipe.

The bottleneck is that drush sql-sync works with temporary files – meaning it has to:

  1. Connect to the remote machine
  2. Perform a sql-dump to a file on the remote machine and compress it
  3. Transfer that file to your machine
  4. Restore the dump to your local database
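
Spelled out as commands, the file-based flow looks roughly like this (a sketch – the alias and paths are made up):

drush @remote sql-dump --gzip --result-file=/tmp/dump.sql   # dump and compress on the remote machine
scp user@remote-host:/tmp/dump.sql.gz .                     # transfer the compressed dump
gzip -cd dump.sql.gz | drush sqlc                           # restore into your local database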

The problem with this is that each step is executed consecutively; it would be better if they were all performed concurrently. Drush defaults to the file-based method because it is compatible with most systems, but if you’re a power user, you may want to find a faster solution.

What we’d like to do instead is:

  1. Connect to the remote machine
  2. Perform these steps at the same time
    1. Dump the database remotely
    2. Compress on the fly
    3. Stream it to your local machine
    4. Uncompress on the fly
    5. Pipe the SQL into your local database

I wrote this little script that accomplishes just that, plus a little extra cleanup after restoring locally. The key is piping data instead of saving it to temporary files. Note that this only works on Linux/Mac.

#!/bin/bash -x
# Drop all local tables. This deliberately has no alias: we only ever want to drop locally.
drush -y sql-drop
# Stream the remote dump straight into the local database.
drush $1 sql-dump --gzip | gzip -cd | drush sqlc
# Run pending database updates.
drush -y updb
# Set the last update check date to now, to prevent checking for updates for a while.
drush -y vset update_last_check `date +%s`
# Reset file paths to their defaults.
drush -y vdel file_directory_path
drush -y vdel file_public_path
# Clear the Drush cache.
drush cc drush

Save this script somewhere (maybe ~/bin/fastdump) and chmod a+x it.

From within your site directory, run `fastdump @someAlias`

This will

  1. Delete all the local tables (to ensure tables that don’t exist in your source are gone)
  2. Restore the database from an alias
  3. Run updates

But quickly! The next step for this would be making it into a Drush command instead of a shell script.
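
For Drush 7/8-style commandfiles, a first pass could look something like this – an untested sketch, with a hypothetical fastdump.drush.inc commandfile:

/**
 * Implements hook_drush_command().
 */
function fastdump_drush_command() {
  $items['fastdump'] = array(
    'description' => 'Drop the local database and stream a fresh copy from a remote alias.',
    'arguments' => array(
      'source' => 'The source site alias, e.g. @someAlias.',
    ),
    'required-arguments' => TRUE,
  );
  return $items;
}

/**
 * Command callback for the fastdump command.
 */
function drush_fastdump_fastdump($source) {
  // The same pipeline as the shell script: drop, stream, update.
  drush_shell_exec('drush -y sql-drop');
  drush_shell_exec("drush $source sql-dump --gzip | gzip -cd | drush sqlc");
  drush_shell_exec('drush -y updb');
}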

don’t kill your live site with a sql-sync

We have a shared alias file that contains every site we work with. For example,

@abcstage
@abctest
@abclive

are all valid aliases. Developers have access to stage and test, while live only works for privileged users.

But we still want to make sure that no funny business goes on.

Create a file at ~/.drush/policy.drush.inc:

/**
 * Implements drush_hook_COMMAND_validate() for sql-sync.
 */
function drush_policy_sql_sync_validate($source = NULL, $destination = NULL) {
  if (strpos($destination, 'live') !== FALSE) {
    return drush_set_error(dt('Per ~/.drush/policy.drush.inc, you may never overwrite the production database.'));
  }
  if (strpos($source, 'stage') !== FALSE && strpos($destination, 'test') !== FALSE) {
    return drush_set_error(dt('Dumping from stage to test is a terrible idea.'));
  }
  if (strpos($source, 'stage') !== FALSE && strpos($destination, 'live') !== FALSE) {
    return drush_set_error(dt('Dumping from stage to live is even worse.'));
  }
}

This will ensure that nobody can accidentally sql-sync to a live site. You can adjust the criteria as need be.
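
For example, with the aliases above, this attempt would now abort with the policy error instead of touching production:

drush sql-sync @abcstage @abclive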

protecting content profiles in drupal 6

Content profiles in Drupal 6 are, by default, plain old nodes – so if they’re published, everyone has access to them.

The code below sets up a node access realm and restricts each profile to its owner.

Pulled from https://drupal.org/node/837220#comment-3147640 – but this is the gist of it.

/**
 * Implements hook_node_access_records().
 */
function custom_node_access_records($node) {
  if ($node->type == 'profile') {
    // Authors need access to their own private profile
    $grants[] = array(
      'realm' => 'custom_profile',
      'gid' => $node->uid,
      'grant_view' => TRUE,
      'grant_update' => TRUE,
      'grant_delete' => FALSE,
    );
    return $grants;
  }
}

/**
 * Implements hook_node_grants().
 */
function custom_node_grants($account, $op) {
  $grants['custom_profile'] = array($account->uid);
  return $grants;
}

Then rebuild node access.
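
One way to do that, assuming Drush is available:

drush php-eval 'node_access_rebuild();'

Or use the “Rebuild permissions” button on the post settings page.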

xdebug makes PHP hang

I’ve found it’s a common misconception that you can’t have XDebug running all the time without impacting PHP performance – you can; the hangs usually have specific causes.

There are a few reasons you could be experiencing hangs or delays:

  • You have xdebug.remote_autostart set to 1.

    This will make XDebug try to contact your debug client on every PHP process. Web, console, whatever. Generally a bad idea! You could easily run into multiple requests trying to connect and stalling.

    It’s best to set this to 0, then use a browser extension to toggle your debugging session on or off, like:

    https://addons.mozilla.org/en-US/firefox/addon/easy-xdebug/

    The only time setting this to 1 is a good idea is when there is no other way to send the XDebug session start command (for example, when debugging an application that receives HTTP calls from a 3rd party machine).

    If you are doing console PHP, you can set an environment variable to toggle debugging.
    On – export XDEBUG_CONFIG="idekey=netbeans-xdebug"
    Off – unset XDEBUG_CONFIG

    Note that it is important to set xdebug.remote_connect_back to 0 and xdebug.remote_host to a valid host: on the console there is no incoming HTTP request, so XDebug can’t discover your debugger’s address via xdebug.remote_connect_back.

    While this is set, PHP run in CLI mode (like drush) will trigger debugging.
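
    Putting that together, toggling a CLI debugging session looks something like this (assuming your debugger listens on 127.0.0.1:9000; XDebug 2 setting names):

    export XDEBUG_CONFIG="idekey=netbeans-xdebug remote_host=127.0.0.1 remote_port=9000"
    drush cc all          # runs under the debugger
    unset XDEBUG_CONFIG   # debugging off again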

  • You had a debugging session open, and opened another one to the same debug client.

    For example, debugging index.php in your browser, opening a new tab, and debugging it again. The 2nd request will stall because your local debugger is busy.

    Suggestions: use the XDebug toggler to enable/disable debugging. You can start debugging in one browser tab, but turn it off for another.

  • Your application makes a URL call to itself, and you had xdebug.remote_autostart set to 1.

    (A combination of the above)

    The best example of this is using Drupal simpletests – you start a PHP process that connects to your debugger, but then the remote HTTP call inside of PHP also tries to connect. Your simpletests will stall indefinitely. Setting xdebug.remote_autostart to 0 ensures that the internal HTTP calls do not trigger XDebug.

    Caveat: Unfortunately, actually debugging a call made inside another call is somewhat complicated. You will have to turn off debugging in your browser/console, then manually inject

    ?XDEBUG_SESSION_START=mykey

    into your internal HTTP calls, disable xdebug.remote_connect_back, and set an xdebug.remote_host.
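
    In Drupal, that injection could look something like this – a hypothetical snippet, where “mykey” must match your IDE key:

    $url .= (strpos($url, '?') === FALSE ? '?' : '&') . 'XDEBUG_SESSION_START=mykey';
    $response = drupal_http_request($url);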

XDebug becomes incredibly flexible with xdebug.remote_connect_back and xdebug.remote_autostart – especially for debugging live servers where you do not want XDebug to take up any overhead. I’ll make another post on that soon – including bits about debug security and the debug proxy for handling multi-user debugging.

If you’re not set up with XDebug yet, check out my quick-start Linux + PHP + XDebug howto.