Propeople Blog: How We Use Vagrant In Our Drupal Development Workflow
A lot of Drupal companies have started adopting virtual development environments, which brings real benefits for unifying the way people collaborate on projects. The main idea is to have everyone working on the same environment, following the same production set up. Using a virtual environment, you can standardize which versions of PHP, MySQL, Apache, Nginx, Memcache, Varnish, Solr, Sass/Compass libraries, etc. are used. This way, you do not have to worry about setting up your front-end developer on their Mac, Windows, or Linux machine with a bunch of software for each particular project. In this article I would like to share some thoughts on how we at Propeople use Vagrant, an open-source tool for creating virtual development environments.
Starting point
When we start a Drupal project, we do not keep just the Drupal codebase in the repo; we also keep the Vagrant configuration files alongside it. We use configurations from puphpet.com as a starting point, but add some customizations on top. Based on our production set up (operating system version, web server, PHP version) we generate the configs and then adjust them to our needs. One example of such a code structure can be seen at https://github.com/podarok/ppdorg. Here is the basic structure:
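The structure in the repository looks roughly like this (an illustrative approximation of a typical PuPHPet-generated project, not the exact contents of the repo linked above):

```
.
├── Vagrantfile                  # reads puphpet/config.yaml
├── puphpet/
│   ├── config.yaml              # OS, PHP, MySQL, etc. versions
│   ├── files/dot/ssh/box_keys/  # SSH keys copied into the box
│   └── shell/                   # provisioning scripts
└── docroot/                     # the Drupal codebase
```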
IP address and tools
We use a virtual host for the project. For example, after bringing up the virtual machine, we can use the URL http://ppdorg.192.168.56.112.xip.io to access our development site. We use the xip.io service for building host names. In some projects, we also put a custom index.html so when you open http://192.168.56.112.xip.io you will see the list of tools available for the site builder.
Tools
We usually set up tools like Adminer.php, phpinfo.php and a few others for developers. Among the regularly used scripts we include are reinstall.sh, which reinstalls the site (we build our sites as installation profiles), and pull_stage.sh, a script that pulls the database and files from the staging or live environment.
Pull from staging
On the Vagrant box we install Drush and use it to sync the database and files from the remote environment. To enable SSH logins to that environment, we also copy SSH keys to the box. This can be done by adjusting the puphpet/shell/ssh-keyget.sh script, adding the following to the end of the script:
echo "Copy box ssh keys to ${VAGRANT_SSH_FOLDER}"
cp "${VAGRANT_CORE_FOLDER}"/files/dot/ssh/box_keys/* "${VAGRANT_SSH_FOLDER}/"
chown "${VAGRANT_SSH_USERNAME}:${VAGRANT_SSH_USERNAME}" "${VAGRANT_SSH_FOLDER}/id_rsa" "${VAGRANT_SSH_FOLDER}/id_rsa.pub"
chmod 600 "${VAGRANT_SSH_FOLDER}"/*
We generate the keys and place them in puphpet/files/dot/ssh/box_keys. Then we add the public key to our staging server, and that is it. You can now vagrant ssh into the box and run the script to keep your database and files up to date.
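Generating the box keys is a one-time step. A minimal sketch (the paths follow the puphpet layout mentioned above; the empty passphrase keeps provisioning non-interactive):

```shell
# Generate a dedicated key pair for the Vagrant box (no passphrase,
# since provisioning runs unattended), stored where the provisioning
# script above expects to find it.
mkdir -p puphpet/files/dot/ssh/box_keys
ssh-keygen -t rsa -b 4096 -N "" -C "vagrant-box" \
  -f puphpet/files/dot/ssh/box_keys/id_rsa
# Then install the public key on the staging server, e.g.:
# ssh-copy-id -i puphpet/files/dot/ssh/box_keys/id_rsa.pub user@staging.example.com
```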
When we pull a database from staging we, of course, sanitize email addresses and adjust settings (for example, switching to Sandbox environments of third party systems that we integrate with, etc.).
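A minimal sketch of what such a pull-and-sanitize script might look like, using Drush site aliases (the @stage alias name and the exact commands here are assumptions for illustration, not Propeople's actual pull_stage.sh):

```shell
# Hypothetical pull_stage.sh: pull the staging database and files,
# then sanitize. Assumes a Drush @stage site alias is configured.
pull_stage() {
  drush sql-sync @stage @self -y              # copy the staging database locally
  drush rsync @stage:%files @self:%files -y   # copy user-uploaded files
  drush sql-sanitize -y                       # scrub email addresses and passwords
}
```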
Drupal's settings.php file
On the Drupal side, we also create a sites/default/dev_settings.php file with all the settings for the database, memcache and any other parameters needed for development. So the only thing developers need to do is to copy this file to settings.php and run the reinstall.sh script to set the site up.
Configuration changes
Let’s say we have added memcache to the project, or Apache Solr. The only thing that we need to do is to commit the Vagrant configuration changes to the repo and ask everyone to run “vagrant provision”. Doing this will set everyone's environment to the current state. This is why we love Vagrant and see the benefits of using it every day.
Downsides of Using Vagrant
We need to have enough RAM. Usually we assign 1 GB for each box. Preferably, a development machine should have 8 GB of RAM, because some of our projects use multiple boxes. An SSD is also preferable. In other words, if you have a pretty decent laptop, you should have everything in place to try Vagrant out.
Conclusion
We find using Vagrant to be extremely effective. It allows our teams to work together more efficiently to deliver some of the biggest, most complex Drupal projects in the industry. Adopting a virtual development environment has had a positive effect on our development workflow, getting rid of past pressure points in our process. To learn more about how we can help your Drupal project succeed, please contact us.
Stanford Web Services Blog: How I learned the hard way to create reusable classes
Drupal is (in)famous for providing an egregious amount of class selectors to target every layer imaginable in its rendered HTML. Some superstar culprits are Field Collections, Field Groups, and complex Views. When we see so many handy, available selector classes, it's so tempting just to target them directly in your CSS. But today, I want to share a lesson I learned the hard way about why you've just gotta resist that temptation, and instead create reusable classes.
Trellon.com: Avoiding Sass Version Differences with Bundler
If you are doing a lot of theming in Drupal with Sass and Compass, there's a good chance your stylesheets rely on specific versions of gems to compile properly. Mixins and functions can change, and sometimes gems rely on specific versions of other gems to work properly.
Drupalize.Me: Preparing for Drupal 8: PSR-4 Autoloading
Drupal Association News: Drupal.org team week notes #28
This week we are planning to deploy a solution for multiple values for listings of current companies and organizations, a fix for the wrong 'open issues' count on user project pages, an upgrade of Fasttoggle to 7.x-1.5, and a few smaller patches.
To stay up-to-date with bigger Drupal.org changes and deployments you can subscribe to Drupal.org change notification emails. You will receive an update from us every Thursday.
Introducing Drupal.org Terms of Service and Privacy Policy
Several days ago we published drafts of the Drupal.org Terms of Service and Privacy Policy. The drafts will become official documents on September 4th, 2014. The next three weeks is your opportunity to review the drafts and give us your feedback.
Previous deployments
A lot of big and small changes went live since our last update. Most noticeably, we deployed:
- infrastructure and software changes to support semantic versioning of Drupal core releases and successfully created an 8.0.x branch,
- RESTful web services (RESTws module) for Drupal.org. This is an important first step towards making Drupal.org more easily integrated with other systems and services. We are actively working to document how the community should begin using these web services to improve how systems like Testbots and Dreditor interact with Drupal.org.
Other deployments include:
- Highlight active tab on /supporters page
- Update Organization supporters view to display all items
- Implement badges: add Hosting Supporters
- Update DrupalorgVersioncontrolLabelVersionMapperGit for semantic versioning in Drupal 8
- Packaging error: drupalorg_drush.drush.inc file access denied
- [Performance] Implement render_cache module for comments
- Comment render caching breaks comments with patches attached
- Implement hook_node_delete() to clear forum block cache when forum topic is being deleted
- Clean up /search page
- Add a hover state for all green buttons
- Need visual delineation between posts and signatures
- Navigation links in https://www.drupal.org/drupal-7.0 are not clickable
- Add distinctive color to css a:visited on D.o
- Create (or increase usage of) forum descriptions
- Add a reference to supporting organizations on projects
- Move field cache to memcache
- Project information project pages showing at the top of the page
Thanks to ergonlogic, FabianX, Steven Jones, DyanneNova and Jaypan for working with us on the issues listed above and making those deployments possible.
Drupal.org infrastructure news
The Drupal.org CDN roll out is complete. There have been minimal issues with the EdgeCast CDN deployment on Drupal.org; the one notable problem, the caching of White Screen of Death (WSOD) pages in Varnish and the EdgeCast CDN, has been successfully mitigated.
The load balancer rebuilds are in progress and should be deployed sometime this month. The initial deployment of the updated load balancer failed. This prompted us to build out a load balancer staging environment to assist with testing and architecting a safer load balancer build. Once we finish testing this environment, we will push the changes to production.
Testing and configuration of the new Git servers also continues to progress. SELinux rules and a copy of cgit are now successfully running on the Git staging environment.
Two full time staff have started at the Drupal Association during the month of July. With new staff on board we have had time to brainstorm and talk through the bigger projects and goals for Drupal.org infrastructure. Our current focus is on our deployment process, with Archie Brentano leading development environment improvements, and Ryan Aslett leading the workflow and QA process (BDD testing) improvements.
Drupal.org User Research
At the end of July, Whitney Hess visited the Drupal Association office in Portland (Oregon) and we started to summarize the information collected during the almost 30 user interviews we conducted previously. We can already say that some of the findings are unexpected and pretty interesting. We look forward to sharing them with the community once we have initial user personas developed.
---
As always, we’d like to say thanks to all volunteers who are working with us, and thanks to the Drupal Association Supporters who make it possible for us to work on these projects.
Cross-posting from g.d.o/drupalorg.
Follow us on Twitter for regular updates: @drupal_org, @drupal_infra
Drupal Association News: Referral Traffic &amp; Your Drupal Marketplace Listing
About a year ago, the Drupal Association switched Drupal.org and all of its associated properties from HTTP to HTTPS. We did this to better protect our users and their data, but it had the unfortunate consequence of making it much more difficult, if not impossible, to trace outbound referral traffic from Drupal.org.
I have a Marketplace listing. What does this mean for me?
For companies with marketplace listings or any links pointing to their websites from Drupal.org, this means that, while you may be getting plenty of visitors to your website from Drupal.org or its associated properties, this will not show up in Google Analytics.
Unless you’ve tagged your links with campaign data, you’ll have no way of knowing who comes to your website from Drupal.org— though you will receive the SEO benefits from having that link in place.
How do I fix this? I want referral information!
Fortunately, getting referral traffic data from Drupal.org can actually be really easy! All you have to do is tag the URLs on your organization page to include campaign information, and you can use this handy tool from Google to do it. If you're not using Google Analytics, most analytics platforms have a URL builder that will let you add campaign tags.
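For example, a campaign-tagged link might look like the following (the domain and parameter values are hypothetical; pick whatever campaign names make sense for your own reporting):

```shell
# Build a campaign-tagged URL by hand. The utm_* parameters are the
# standard Google Analytics campaign fields; the values are illustrative.
BASE="https://www.example.com/"
TAGGED="${BASE}?utm_source=drupal.org&utm_medium=referral&utm_campaign=marketplace"
echo "$TAGGED"
```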
That’s all there is to it. It’s an easy fix to the problem, and everybody wins: you’ll still get your referral information, and our users stay safe thanks to HTTPS!
Image courtesy of kongsky on freedigitalphotos.com
Drupal Easy: DrupalEasy Podcast 137: Are you the Drupal guy? (Dries Buytaert)
Dries Buytaert (Dries), founder and lead of the Drupal project and co-founder and CTO of Acquia, joins Mike, Andrew, Ryan, and Ted for a very special episode of the podcast. We peppered Dries with questions on a wide array of topics including the pace of Drupal 8, semantic versioning and initiatives, funding core development, Drupal 6 support, Acquia Lift, Acquia Engage, Dries' score on the Acquia Certification Exam, the founding of Acquia, and not-exactly-why Taylor Swift was cut from one of his keynotes (phew!).
Trellon.com: Getting Started with Sass and Compass in Drupal
Looking to write CSS for your Drupal sites faster and easier, and eliminate common defects? The first step probably involves choosing a CSS preprocessor to make your stylesheets programmable and accelerate the production of code. There are several out there which are widely supported, but the ones we really like to use are Sass and Compass. This article explains how to set up these tools for use within Drupal.
Metal Toad: Attack of the PHP clones: Drupal, HHVM, and Vagrant
For those wanting to give it a spin, Metal Toad has added HHVM support to our Vagrant box: github.com/metaltoad/trevor.
groups.drupal.org frontpage posts: GSoC 10 Year Reunion / Mentor Summit
https://sites.google.com/site/gsocmentorsummitstudentreunion/home
Each year Google organizes a "Mentor Summit" after Summer of Code to help summarize the positive and negative experiences in an unconference style weekend of meetings. It is an awesome event and Drupal has participated several times.
"This year we are holding a GSoC Reunion instead of the traditional mentor summit. What does that mean you ask? First of all, we will be almost doubling the size of this event and having up to 600 attendees. [...] Two representatives from each successfully participating organization are invited to Google to greet, collaborate and code. Our mission for the weekend: make the program even better, have fun and make new friends." -Google
Event Details @ https://sites.google.com/site/gsocmentorsummitstudentreunion/home
Who is going to attend the event and represent the Drupal community? Well... let us know if you're interested. Any student or mentor who has participated in Google Summer of Code with Drupal over the past 10 years is eligible to attend. One of our main goals is to reward students/mentors who deserve this all-inclusive trip of a lifetime, but first we need to know who is actually available to attend. The event takes place 23 - 26 October, 2014 at the San Jose Marriott and Hilton San Jose. Please contact Slurpee ( https://www.drupal.org/user/91767 ) directly if you're interested in attending. Let us know about your GSoC experience and why you would be a good person to represent Drupal. The deadline to show interest is Tuesday, August 19th at 23:59 -6 UTC. Please contact Slurpee prior to the deadline and allow a bit of time for org admins to review everyone's feedback.
Google will fund two student and/or mentor delegates who have participated in GSoC in the last 10 years from each open-source organization. Google provides a $2200 stipend to each organization to split between the two delegates. For example, in the past a delegate from the States budgeted ~$700 and a European delegate budgeted ~$1300.
The Reunion is a perfect chance to meet and connect with open-source experts from around the world, discuss how to build a community around software, learn from other projects, and show how awesome the Drupal community is. It would be really great to showcase some of the different ways the Drupal community stays welcoming and encourages giving back to Drupal, so delegates should expect to prepare a presentation for the unconference. Example topics could be "Growing Pains of a Community" or "Why is Drupal such a Welcoming Community?"
If you are willing to come, take into account that the Mentor Summit/Reunion program is quite absorbing because of the different activities, so if you have never visited the beautiful San Jose area before, you may want to arrive early or stay after the Reunion to enjoy the city (extra nights and costs won't be covered by Google or Drupal in any case). It is important to note that each delegate is responsible for initial travel costs and will be reimbursed after Google has paid Drupal (which might take several months).
P.S. It is never too late to start planning for GSoC 2015! Please ping us with your thoughts/ideas/feedback or if you want to be a mentor/student @ https://groups.drupal.org/node/437638
Forum One: Installing Solr and Search API on Ubuntu 13.10 for Local Development
At Forum One, we standardize our local development environments using virtual machines provided by Vagrant, but a local dev environment native to a host OS is sometimes also useful. I recently found myself adding Apache Solr to my Ubuntu host’s web server stack for a Drupal project, and I wanted to share my experience.
I was running Ubuntu 13.10 (saucy) and, for my sanity and system integrity, I always try to manage as many packages as I can through the actual Ubuntu/Debian package management system, APT. There are many great references out there – Ben Chavet’s article from Lullabot got me most of the way there – but I want to rehash it here quickly, with specific instructions for my software stack.
Unfortunately, as most installation guides note, the packages in many Linux distribution package archives tend to have older versions of Solr; Ubuntu’s saucy package archive has Solr 3.6. So in order to take advantage of the last couple of years of Solr development – as recommended by most Drupal-related references – Solr was the one exception where I installed the software manually outside of the package manager.
Reviewing the Software Stack
- Ubuntu 13.10 (saucy)
- Java 7 (openjdk-7-jdk package)
- Tomcat 7 (tomcat7 package)
- Solr 4.8.1 (manual download/installation)
- Drupal 7.25
- Search API 7.x-1.11
- Search API Solr 7.x-1.4
We begin by installing Java from the command line.
sudo apt-get install openjdk-7-jdk
Installing Tomcat
Now we can install Tomcat from the command line.
sudo apt-get install tomcat7
Once Tomcat is installed, we can verify that the Tomcat default web page is present by opening up a browser window and navigating to http://localhost:8080/.
Installing Solr
Download Solr 4.8.1 (the latest version at the time of writing) from http://lucene.apache.org/solr/, expand the archive, and copy Solr's Java libraries to the Tomcat library directory.
sudo cp solr-4.8.1/dist/solrj-lib/* /usr/share/tomcat7/lib/
Next, copy Solr's logging configuration file to the Tomcat configuration directory.
sudo cp solr-4.8.1/example/resources/log4j.properties /var/lib/tomcat7/conf/
We will also need to copy the Solr webapp to the Tomcat webapps directory.
sudo cp solr-4.8.1/dist/solr-4.8.1.war /var/lib/tomcat7/webapps/solr.war
Then, define the Solr context by creating the solr.xml context file.
sudo vim /var/lib/tomcat7/conf/Catalina/localhost/solr.xml
We just need a context fragment pointing to the webapp file from above and to the Solr home directory.
<Context docBase="/var/lib/tomcat7/webapps/solr.war" debug="0" crossContext="true">
  <Environment name="solr/home" type="java.lang.String" value="/usr/share/tomcat7/solr" override="true" />
</Context>
Configuring Solr
To configure Solr, first create the Solr home directory.
sudo mkdir /usr/share/tomcat7/solr
Next, copy the Solr configuration files to the Solr home directory.
sudo cp -r solr-4.8.1/example/solr/collection1/conf /usr/share/tomcat7/solr/
Now we can verify that Solr is working by pointing our browser to http://localhost:8080/solr.
Finally, we’ll copy the Drupal Search API Solr Search module configuration files to the Solr home directory.
sudo cp sites/all/modules/contrib/search_api_solr/solr-conf/4.x/* /usr/share/tomcat7/solr/conf/
Defining the Solr Core
First, we'll need to define our Solr core by editing solr.xml (the core name 'drupal' is arbitrary).
sudo vim /usr/share/tomcat7/solr/solr.xml
<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="false">
  <cores adminPath="/admin/cores">
    <core name="drupal" instanceDir="drupal" />
  </cores>
</solr>
Next, we'll create the Solr core directory.
sudo mkdir /usr/share/tomcat7/solr/drupal
Then, we'll copy our base Solr configuration files to the core directory.
sudo cp -r /usr/share/tomcat7/solr/conf /usr/share/tomcat7/solr/drupal/
Finally, we can verify the Solr core is available by browsing to http://localhost:8080/solr/#/~cores/drupal:
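The same check can be scripted with the CoreAdmin API (a sketch; it assumes the port and core name used above, and that curl is installed):

```shell
# Query Solr's CoreAdmin API for the 'drupal' core; prints "core OK"
# when the core appears in the JSON status response.
check_solr_core() {
  curl -s "http://localhost:8080/solr/admin/cores?action=STATUS&core=drupal&wt=json" \
    | grep -q '"drupal"' && echo "core OK" || echo "core missing"
}
```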
Now that Solr is up and running, add a Solr server within Drupal: use a "Solr Service" class on localhost with port 8080 and a path of /solr/drupal:
Next, add an index, selecting the previously created server and other settings of your choice:
Head over to the Search API documentation and Search API Solr search documentation for more details.
Last Call Media: Design 4 Drupal By Design(er)
This was my second Design 4 Drupal, my first being just last year. As someone far on the “design” side of the “web design” spectrum (i.e., I know little to no code outside of cursory CSS), Design4Drupal is one of the only Drupal conferences I can go to and find content that’s directly applicable to what I do.
I made it to only a couple of sessions, not counting our own. (Our own being the case study on our company website that I co-presented, and Rob's Responsive Javascript session, which I attended in a support capacity. Although I should have known he didn't need me; attendance for it was strong, unsurprisingly.) One of them was The 10 Commandments of Drupal Theming by David Moore. I thought it might help inform my design decisions when designing for Drupal. I knew right away when David said "There is one image in this presentation that isn't a screenshot of code" that I had probably made a mistake. My suspicions were confirmed when the first Commandment turned out to be security-based. But that "one image"? It turned out to redeem the session for me: it was Alien toys with little party hats on.
So that leaves one session that I attended that I could get practical information from, and luckily it turned out to do just that. It was Anti-Handoff: A Better Design & Front-end Relationship by Erin Halloway. Erin presented a number of methods for designers and developers to work more closely together, with the goal of reducing friction between the two and thereby producing a final site that is more finely designed. I was happy to find out that many of the strategies she outlined are already taken as given at Last Call; design and programming work hand-in-hand during all stages of site development, as opposed to the “Wham bam, thank you ma’am” handoff that apparently happens at other agencies. And we’re already taking care to only wireframe pages that need to be, and to only apply design to as many of those wireframes as is necessary to communicate the design to our client and give programming a complete picture of the site. The idea that Erin presented that I’m looking most forward to exploring is vertical rhythm. I hadn’t heard of it!
There were a couple of points that I didn't quite agree with Erin on. One is that I have Photoshop Actions that make exporting site assets such a snap that I've never had the urge to explore Photoshop's Generator feature; if any cleanup of the automatically created folder of assets is required, there's no way Generator could be more efficient for me. (Not to mention that design trends and advancing technologies seem to be conspiring to reduce the number of images we use on any given site, sometimes nearly to nil.) But hey, that's pretty much a tools preference, and different people often use different tools to complete the same job equally efficiently.
What's funny is that, considering how in line Last Call is with Erin's overall approach to handoffs (or the lack of them, rather), we're actually missing the baseline example she outlined: we don't even have a meeting when a project moves from design to development. This is due to a simple fact: up until very recently (just this week, in fact!) we were working out of a very small office, basically all sitting around the same table. We didn't have a meeting because we didn't need one. I'd lean over and hand the style guide to the lead dev, and they'd ask me questions as needed. The size of our office was not sustainable, but Erin's session made me realize that it had actually instilled some very healthy practices into our business: we're all very connected, so communication is a snap, and we always work to make each other's jobs as easy as possible. We'll have to work to maintain this aspect of Last Call now that we're in a bigger space (over eight times bigger!). And I'll have to work extra hard; I'm moving to Baltimore later this month!
Midwestern Mac, LLC: Solr for Drupal Developers, Part 1: Intro to Apache Solr
It's common knowledge in the Drupal community that Apache Solr (and other text-optimized search engines like Elasticsearch) blow database-backed search out of the water in terms of speed, relevance, and functionality. But most developers don't really know why, or just how much an engine like Solr can help them.
I'm going to be writing a series of blog posts on Apache Solr and Drupal, and while some parts of the series will be very Drupal-centric, I hope I'll be able to illuminate why Solr itself (and other search engines like it) are so effective, and why you should be using them instead of simple database-backed search (like Drupal core's Search module uses by default), even for small sites where search isn't a primary feature.
As an aside, I am writing this series of blog posts from the perspective of a Drupal developer who has worked with large-scale, highly customized Solr search for Mercy (example), and with a variety of small-to-medium sites that use Hosted Apache Solr, a service I've been running as part of Midwestern Mac since early 2011.
Why not Database?
Apache Solr's wiki leads off its Why Use Solr page with the following:
If your use case requires a person to type words into a search box, you want a text search engine like Solr.
At a basic level, databases are optimized for storing and retrieving bits of data, usually either a record at a time or in batches. And relational databases like MySQL, MariaDB, PostgreSQL, and SQLite are set up in such a way that data is stored in various tables and fields, rather than in one large bucket per record.
In Drupal, a typical node entity will have a title in the node table, a body in the field_data_body table, maybe an image with a description in another table, an author whose name is in the users table, etc. Usually, you want to allow users of your site to enter a keyword in a search box and search through all the data stored across all those fields.
Drupal's Search module avoids making ugly and slow search queries by building an index of all the search terms on the site, and storing that index inside a separate database table, which is then used to map keywords to entities that match those keywords. Drupal's venerable Views module will even enable you to bypass the search indexing and search directly in multiple tables for a certain keyword. So what's the downside?
Mainly, performance. Databases are built to be efficient query engines—provide a specific set of parameters, and the database returns a specific set of data. Most databases are not optimized for arbitrary string-based search. Queries where you use LIKE '%keyword%' are not that well optimized, and will be slow—especially if the query is being used across multiple JOINed tables! And even if you use the Search module or some other method of pre-indexing all the keyword data, relational databases will still be less efficient (and require much more work on a developer's part) for arbitrary text searches.
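To make the LIKE '%keyword%' point concrete, here is a toy demonstration (using SQLite for portability; the same index-bypassing behavior applies to MySQL and friends): a leading wildcard prevents the database from using an index on the column, forcing a full scan.

```shell
# Toy demonstration of a leading-wildcard LIKE query. Even though the
# title column is indexed, LIKE '%solr%' cannot use that index and
# scans every row.
DB=$(mktemp)
sqlite3 "$DB" <<'SQL'
CREATE TABLE node (nid INTEGER PRIMARY KEY, title TEXT);
CREATE INDEX title_idx ON node (title);
INSERT INTO node (title) VALUES ('Apache Solr basics'), ('MySQL tuning');
-- The leading '%' means title_idx cannot be used: full table scan.
SELECT nid, title FROM node WHERE title LIKE '%solr%';
SQL
rm -f "$DB"
```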
If you're simply building lists of data based on very specific parameters (especially where the conditions for your query all utilize speedy indexes in the database), a relational database like MySQL will be highly effective. But usually, for search, you don't just have a couple options and maybe a custom sort—you have a keyword field (primarily), and end users have high expectations that they'll find what they're looking for by simply entering a few keywords and clicking 'Search'.
Acquia: Displaying a Resultset from a Custom SQL Query in Drupal 7
Originally posted on Yellow Pencil's blog. Follow @kimbeaudin on Twitter!
In my last Drupal blog post I talked about how you can alter an existing view query with hook_views_query_alter, but what if you want to display a result set (your own 'view') from a custom SQL query?
Well, here's how.
Drupal.org Featured Case Studies: Chatham House
Chatham House, home of the Royal Institute of International Affairs, is an independent policy institute and world-leading source of independent analysis, informed debate and influential ideas on how to build a prosperous and secure world for all.
They decided to rebuild their website to better promote their independent analysis and new research on international affairs, engaging with their audiences and disseminating their output as widely as possible in the interests of ultimately influencing all international decision-makers and -shapers.
They wanted a modern, responsive website with a clean design that would provide a better user experience and increased traffic. Additionally, one of the requirements was to integrate the website with their events and membership management software.
Chatham House have used Drupal as their CMS of choice since summer 2011. Their previous Drupal 6 based website was suffering from intermittent performance issues, a dated, non-responsive design and did not integrate with Chatham House’s internal membership management software.
Chatham House worked with Torchbox to produce a new, modern and responsive design on a new Drupal 7 instance. A lot of previous custom code was replaced with tried and tested combinations of contrib modules and a large, rolling content migration (more than 10,000 items) was required to move content from the old Drupal 6 site to the new Drupal 7 site.
Chatham House’s membership management software was also integrated with their new Drupal 7 website for user authentication and events content.
Key modules/theme/distribution used: Display Suite, Features, Feeds, Election, Linkit, Media, Menu block, Menu Trail By Path, Nagios monitoring, Custom Breadcrumbs, Context, Rabbit Hole, Secure Pages, oEmbed, Workbench, Views, Wysiwyg, mothership
Appnovation Technologies: Bad Standards Irritate Me: Drupal Edition
As usual, for my blog posts on the Appnovation website, I'd like to kick this article off with a disclaimer:
Cheppers blog: Drupalaton 2014 - how we saw it
‘This is the nicest Drupal camp I’ve ever been to.’ These words are quoted from Steve Purkiss, who told this to me on the last Drupalaton night during the cruise party. I think this describes the Drupalaton experience the best - Drupal, summer, fun at the same time.
Dries Buytaert: Help me write my DrupalCon Amsterdam keynote
For my DrupalCon Amsterdam keynote, I want to try something slightly different. Instead of coming up with the talk track myself, I want to "crowdsource" it. In other words, I want the wider Drupal community to have direct input on the content of the keynote. I feel this will provide a great opportunity to surface questions and ideas from the people who make Drupal what it is.
In the past, I've done traditional surveys to get input for my keynote and I've also done keynotes that were Q&A from beginning to end. This time, I'd like to try something in between.
I'd love your help to identify the topics of interest (e.g. scaling our community, the future of the web, information about Drupal's competitors, "headless" Drupal, the Drupal Association, the business of Open Source, sustaining core development, etc.). You can make your suggestions in the comments of this blog post or on Twitter (tag them with @Dries and #driesnote). I'll handpick some topics from all the suggestions, largely based on popularity but also based on how important and meaty I think the topic is.
Then, in the lead-up to the event, I'll create discussion opportunities on some or all of the topics so we can dive deeper on them together, and surface various opinions and ideas. The result of those deeper conversations will form the basis of my DrupalCon Amsterdam keynote.
So what would you like me to talk about? Suggest your topics in the comments of this blog post or on Twitter by tagging your suggestions with #driesnote and/or @Dries. Thank you!
drunken monkey: Updating the Search API to D8 – Part 5: Using plugin derivatives
The greatest thing about all the refactoring in Drupal 8 is that, in general, a lot of those special Drupalisms used nowhere else were thrown out and replaced by sound design patterns, industry best practices and concepts that newcomers from other branches of programming will have an easy time of recognizing and using. While I can understand that this is an annoyance for some who have got used to the Drupalisms (and who haven't got a formal education in programming), as someone with a CS degree and a background in Java I was overjoyed at almost anything new I learned about Drupal 8, which, in my opinion, just made Drupal so much cleaner.
But, of course, this has already been discussed in a lot of other blog posts, podcasts, sessions, etc., by a lot of other people.
What I want to discuss today is one of the few instances where it seems this principle was violated and a new Drupalism, not known anywhere else (as far as I can tell, at least – if I'm mistaken I'd be grateful to be educated in the comments), introduced: plugin derivatives.
Probably some of you have already seen it there somewhere, especially if you were foolish enough to try to understand the new block system (if you succeeded, I salute you!), but I bet (or, hope) most of you had the same reaction as me: a very puzzled look and an involuntary “What the …?” In my case, this question was all the more pressing because I first stumbled upon plugin derivatives in my own module – Frédéric Hennequin had done a lot of the initial work of porting the module and since there was a place where they fit perfectly, he used them. Luckily, I came across this in Szeged where Bram Goffings was close by and could explain this to me slowly until it sank in. (Looking at the handbook documentation now, it actually looks quite good, but I remember that, back then, I had no idea what they were talking about.)
So, without (even) further ado, let me now share this arcane knowledge with you!
Plugin derivatives, even though very Drupalistic (?), are actually a rather elegant solution for an interesting (and pressing) problem: dynamically defining plugins.
For example, take Search API's "datasource" plugins. These provide item types that can be indexed by the Search API, a further abstraction from the "entity" concept to be able to handle non-entities (or, indeed, even non-Drupal content). We of course want to provide an item type for each entity type, but we don't know beforehand which entity types there will be on a site – also, since entities can be accessed with a common API we can use the same code for all entity types and don't want a new class for each.
In Drupal 7, this was trivial to do:
/**
 * Implements hook_search_api_item_type_info().
 */
function search_api_search_api_item_type_info() {
  $types = array();
  foreach (entity_get_property_info() as $type => $property_info) {
    if ($info = entity_get_info($type)) {
      $types[$type] = array(
        'name' => $info['label'],
        'datasource controller' => 'SearchApiEntityDataSourceController',
        'entity_type' => $type,
      );
    }
  }
  return $types;
}
Since plugin definition happens in a hook, we can just loop over all entity types, set the same controller class for each, and put an additional entity_type key into the definition so the controller knows which entity type it should use.
Now, in Drupal 8, there's a problem: as discussed in the previous part of this series, plugins now generally use annotations on the plugin class for the definition. That, in turn, would mean that a single class can only represent a single plugin, and since you can't (or at least really, really shouldn't) dynamically define classes there's also not really any way to dynamically define plugins.
One possible workaround would be to just use the alter hook which comes with nearly any plugin type and dynamically add the desired plugins there – however, that's not really ideal as a general solution for the problem, especially since it also occurs in core in several places. (The clearest example here are probably menu blocks – for each menu, you want one block plugin defined.)
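To illustrate why this workaround is clumsy, here is a rough sketch of what it might look like for the menu block case. This is a hypothetical example (the module name mymodule and the MenuBlock class are made up for illustration); it assumes the block plugin manager invokes hook_block_alter() on discovered definitions, as its alterInfo() setting provides:

```php
<?php

/**
 * Implements hook_block_alter().
 *
 * Hypothetical workaround: dynamically add one block definition per menu
 * by hand-building definitions in the alter hook. Derivatives solve this
 * more cleanly, as the rest of the article shows.
 */
function mymodule_block_alter(array &$definitions) {
  $menus = \Drupal::entityManager()->getStorage('menu')->loadMultiple();
  foreach ($menus as $menu_id => $menu) {
    $definitions['mymodule_menu_' . $menu_id] = array(
      'id' => 'mymodule_menu_' . $menu_id,
      'admin_label' => $menu->label(),
      // Extra key telling the shared class which menu to render.
      'menu' => $menu_id,
      'class' => 'Drupal\mymodule\Plugin\Block\MenuBlock',
      'provider' => 'mymodule',
    );
  }
}
```

The main drawback is that every such module reinvents this loop in an alter hook meant for small tweaks, with no per-plugin encapsulation of the multiplication logic.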
So, as you might have guessed, the solution to this problem was the introduction of the concept of derivatives. Basically, every time you define a new plugin of any type (as long as the manager inherits from DefaultPluginManager), you can add a deriver key to its definition, referencing a class. This deriver class will then automatically be called when the plugin system looks for plugins of that type, and it allows the deriver to multiply the plugin's definition, adding or altering any definition keys as appropriate. It is, essentially, another layer of altering that is specific to one plugin, serves a specific purpose (i.e., multiplying that plugin's definition) and occurs before the general alter hook is invoked.
Hopefully, an example will make this clearer. Let's see how we used this system in the Search API to solve the above problem with datasources.
How to use derivatives
So, how do we define several datasource plugins with a single class? Once you understand how it works (or what it's supposed to do), it's thankfully pretty easy to do. We first create our plugin as normal (or just copy it from Drupal 7 and fix the class name and namespace), but add the deriver key and internally assume that the plugin definition has an additional entity_type key, which tells us which entity type this specific datasource plugin should work with.
So, we put the following into src/Plugin/SearchApi/Datasource/ContentEntityDatasource.php:
<?php

namespace Drupal\search_api\Plugin\SearchApi\Datasource;

/**
 * @SearchApiDatasource(
 *   id = "entity",
 *   deriver = "Drupal\search_api\Plugin\SearchApi\Datasource\ContentEntityDatasourceDeriver"
 * )
 */
class ContentEntityDatasource extends DatasourcePluginBase {

  public function loadMultiple(array $ids) {
    // In the real code, this of course uses dependency injection, not a global function.
    return entity_load_multiple($this->pluginDefinition['entity_type'], $ids);
  }

  // Plus a lot of other methods …

}
Note that, even though we can skip even required keys in the definition (like label here), we still have to set an id. This is called the "plugin base ID" and will be used as a prefix to all IDs of the derivative plugin definitions, as we'll see in a bit.
The deriver key is of course the main thing here. The namespace and name are arbitrary (the standard is to use the same namespace as the plugin itself, but append "Deriver" to the class name); the class just needs to implement the DeriverInterface – nothing else is needed. There is also ContainerDeriverInterface, a sub-interface for when you want dependency injection for creating the deriver, and an abstract base class, DeriverBase, which isn't very useful though, since the interface only has two methods. Concretely, the two methods are getDerivativeDefinitions(), for getting all derivative definitions, and getDerivativeDefinition(), for getting a single one – the latter usually being a simple two-liner using the former.
Therefore, this is what src/Plugin/SearchApi/Datasource/ContentEntityDatasourceDeriver.php looks like:
<?php

namespace Drupal\search_api\Plugin\SearchApi\Datasource;

class ContentEntityDatasourceDeriver implements DeriverInterface {

  // Note: using $this->t() assumes the class also uses StringTranslationTrait.

  public function getDerivativeDefinition($derivative_id, $base_plugin_definition) {
    $derivatives = $this->getDerivativeDefinitions($base_plugin_definition);
    return isset($derivatives[$derivative_id]) ? $derivatives[$derivative_id] : NULL;
  }

  public function getDerivativeDefinitions($base_plugin_definition) {
    $base_plugin_id = $base_plugin_definition['id'];
    $plugin_derivatives = array();
    foreach (\Drupal::entityManager()->getDefinitions() as $entity_type_id => $entity_type_definition) {
      if ($entity_type_definition instanceof ContentEntityType) {
        $label = $entity_type_definition->getLabel();
        $plugin_derivatives[$entity_type_id] = array(
          'id' => $base_plugin_id . PluginBase::DERIVATIVE_SEPARATOR . $entity_type_id,
          'label' => $label,
          'description' => $this->t('Provides %entity_type entities for indexing and searching.', array('%entity_type' => $label)),
          'entity_type' => $entity_type_id,
        ) + $base_plugin_definition;
      }
    }
    return $plugin_derivatives;
  }

}
As you see, getDerivativeDefinitions() just returns an array with derivative plugin definitions, keyed by what's called their "derivative ID", with each id key set to a combination of base ID and derivative ID, separated by PluginBase::DERIVATIVE_SEPARATOR (which is simply a colon, ":"). We additionally set the entity_type key for all definitions (as used in the plugin) and also set the other definition keys (as defined in the annotation) accordingly.
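To make that concrete, here is a sketch of what consuming code might look like on a site with a node entity type. The service name plugin.manager.search_api.datasource is an assumption for illustration; the exact name depends on how the module registers its plugin manager:

```php
<?php

// Hypothetical usage of the datasource plugin manager.
$manager = \Drupal::service('plugin.manager.search_api.datasource');

// The derived IDs combine the base ID "entity" with each entity type ID,
// separated by a colon – e.g. "entity:node", "entity:user".
$datasource = $manager->createInstance('entity:node');

// The definition carries the key the deriver added.
$definition = $datasource->getPluginDefinition();
// $definition['entity_type'] is "node" here.
```

From the caller's perspective, a derived plugin is created and used exactly like a statically defined one.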
And that's it! If your plugin type implements DerivativeInspectionInterface (which the normal PluginBase class does), you also have handy methods for finding out a plugin's base ID and derivative ID (if any). But usually the code using the plugins doesn't need to be aware of derivatives and can simply handle them like any other plugin. Just be aware that this leads to plugin IDs now all potentially containing colons, and not only the usual "alphanumerics plus underscores" ID characters.
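For example (continuing the assumed datasource instance from above), the two inspection methods of DerivativeInspectionInterface split the combined ID back apart:

```php
<?php

// $datasource is a derived plugin instance with ID "entity:node".
// DerivativeInspectionInterface (implemented by PluginBase) provides:
$base_id = $datasource->getBaseId();             // "entity"
$derivative_id = $datasource->getDerivativeId(); // "node"
```

This is mainly useful for code that needs to treat all derivatives of one base plugin as a group, e.g. for grouping them in a UI.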
A side note about nomenclature
This is a bit confusing, actually, especially as older documentation remains outdated: the new individual plugins derived from the base definition are referred to as "derivative plugin definitions", "plugin derivatives" or just "derivatives". Confusingly, though, the class creating the derivatives was also called a "derivative class" (and the key in the plugin definition was, consequently, derivative).
In #1875996: Reconsider naming conventions for derivative classes, this discrepancy was discussed and eventually resolved by renaming the classes creating derivative definitions (along with their interfaces, etc.) to "derivers".
If you are reading documentation that is more than a few months old, hopefully this will spare you some confusion.
Image credit: DonkeyHotey
Paragon-Blog: Performing DRD actions from Drush: Drupal power tools, part 2 of 4
Drupal Remote Dashboard (DRD) fully supports Drush, and it does this in two ways: DRD provides all its actions as Drush commands, and DRD can trigger the execution of Drush commands on remote domains. This blog post is part of a series (see part 1 of 4) that describes all the possibilities around these two powerful tools. This is part 2, which describes how to trigger any of DRD's actions from the command line by utilizing Drush.