Software
Phase2: Simplify Your Logstash Configuration
As I mentioned in my recent post, I got a chance to upgrade the drupal.org ELK stack last week. In doing so, I got to revisit a Logstash configuration that I created over a year ago, clean up some less-than-optimal configurations based on a year's worth of experience, and simplify the configuration file a great deal.
The Drupal.org Logging Setup

Drupal.org is served by a large (and growing) number of servers. They all ship their logs to a central logging server for archival, and around a month’s worth are kept in the ELK stack for analysis.
Logs for Varnish, Apache, and syslog are forwarded to a centralized log server for analysis by Logstash. Drupal messages are output to syslog using Drupal core’s syslog module so that logging does not add writes to Drupal.org’s busy database servers. (@TODO: Check if these paths can be published.) Apache logs end up in /var/log/apache_logs/$MACHINE/$VHOST/transfer/$DATE.log, Varnish logs end up in /var/log/varnish_logs/$MACHINE/varnishncsa-$DATE.log, and syslog logs end up in /var/log/HOSTS/$MACHINE/$DATE.log. All types of logs get gzipped 1 day after they are closed to save disk space.
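A set of file inputs along the following lines would pick those paths up and tag each event with a type that the filters below can match on. This is a hypothetical sketch, not the actual drupal.org input configuration, which is not shown in this post:

```
input {
  file {
    # Apache transfer logs, one directory per machine and vhost.
    path => "/var/log/apache_logs/*/*/transfer/*.log"
    type => "apache"
  }
  file {
    # varnishncsa output, one directory per machine.
    path => "/var/log/varnish_logs/*/varnishncsa-*.log"
    type => "varnish"
  }
  file {
    # Centralized syslog, one directory per machine.
    path => "/var/log/HOSTS/*/*.log"
    type => "syslog"
  }
}
```

The type values ("apache", "varnish", "syslog") are what the conditionals in the filter snippets below key off of.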
Pulling Contextual Smarts From Logs

The Varnish and Apache logs do not contain any content in the logfiles to identify which machine they are from, but the file input sets a path field that can be matched with grok to pull the machine name out of the path and put it into the logsource field, which grok’s SYSLOGLINE pattern sets when analyzing syslog logs.
Filtering on the logsource field can be quite helpful in the Kibana web UI if a single machine is suspected of behaving weirdly.
Using Grok Overwrite

Consider this snippet from the original version of the Varnish configuration. As I mentioned in my presentation, Varnish logs are nice in that they include the HTTP Host header, so you can see exactly which hostname or IP was requested. This makes sense for a daemon like Varnish, which does not necessarily have a native concept of virtual hosts (vhosts), whereas nginx and Apache default to logging by vhost.
Each Logstash configuration snippet shown below assumes that Apache and Varnish logs have already been processed using the COMBINEDAPACHELOG grok pattern, like so.
    filter {
      if [type] == "varnish" or [type] == "apache" {
        grok {
          match => [ "message", "%{COMBINEDAPACHELOG}" ]
        }
      }
    }

The following snippet was used to normalize Varnish’s request headers to strip https?:// and the Host header, so that the request field in Apache and Varnish logs will be exactly the same and any filtering of web logs can be performed with the vhost and logsource fields.
    filter {
      if [type] == "varnish" {
        grok {
          # Overwrite host for Varnish messages so that it's not always "loghost".
          match => [ "path", "/var/log/varnish_logs/%{HOST:logsource}" ]
        }
        # Grab the vhost and a "request" that matches Apache from the "request" variable for now.
        mutate {
          add_field => [ "full_request", "%{request}" ]
        }
        mutate {
          remove_field => "request"
        }
        grok {
          match => [ "full_request", "https?://%{IPORHOST:vhost}%{GREEDYDATA:request}" ]
        }
        mutate {
          remove_field => "full_request"
        }
      }
    }

As written, this snippet copies the request field into a new field called full_request, unsets the original request field, uses a grok filter to parse both the vhost and request fields out of that synthesized full_request field, and finally deletes full_request.
The original approach works, but it takes a number of steps and mutations. The grok filter has a parameter called overwrite that allows this configuration stanza to be considerably simplified. The overwrite parameter accepts an array of fields that grok should overwrite if it finds matches. By using overwrite, I was able to remove all of the mutate filters from my configuration, and the entire thing now looks like the following.
    filter {
      if [type] == "varnish" {
        grok {
          # Overwrite host for Varnish messages so that it's not always "loghost".
          # Grab the vhost and a "request" that matches Apache from the "request" variable for now.
          match => {
            "path" => "/var/log/varnish_logs/%{HOST:logsource}"
            "request" => "https?://%{IPORHOST:vhost}%{GREEDYDATA:request}"
          }
          overwrite => [ "request" ]
        }
      }
    }

Much simpler, isn’t it? Two grok filters and three mutate filters have been combined into a single grok filter with two matching patterns and a single field that it can overwrite. Also note that this version of the configuration passes a hash into the grok filter. Every example I’ve seen just passes an array to grok, but the documentation for the grok filter states that it takes a hash, and this works fine.
Ensuring Field Types

Recent versions of Kibana have also gained the useful ability to do statistics calculations on the current working dataset. For example, you can have Kibana display the mean number of bytes sent or the standard deviation of backend response times (if you are capturing them – see my DrupalCon Amsterdam slides for more information on how to do this and how to normalize it between Apache, nginx, and Varnish). Then, if you filter down to all requests for a single vhost or a set of paths, the statistics will update.
Kibana will only show this option for numerical fields, however, and by default any data that has been parsed with a grok filter will be a string. Converting string fields to other types is a perfect use of the mutate filter. Here is an example of converting the bytes and the response code to integers using a mutate filter.
@TODO: Test that hash syntax works here!
    filter {
      if [type] == "varnish" or [type] == "apache" {
        mutate {
          convert => {
            "bytes" => "integer"
            "response" => "integer"
          }
        }
      }
    }

Lessons Learned

Logstash is a very powerful tool, and small things like the grok overwrite parameter and the mutate convert parameter can help make your log processing configuration simpler and get more usefulness out of your ELK cluster. Check out Chris Johnson’s post about adding MySQL Slow Query Logs to Logstash!
If you have any other useful Logstash tips and tricks, leave them in the comments!
4Sitestudios.com Drupal Blog: Major Drupal 7 Security Vulnerability - Update Now!
Last week the Drupal security team announced the existence of a major security vulnerability in all versions of Drupal 7. This vulnerability is rated as “highly critical” because it allows an attacker to take full control of your site remotely, without needing to log in as a privileged user. Attacks using this vulnerability are already being reported.
If 4Site built your Drupal 7 site, if we handle your site maintenance, or if you're just looking for someone to help you apply the update on your site and keep your site secure, please contact us!
Bluespark Labs: Uninstalling and purging field modules all at once
Sometimes we want to uninstall a module from our Drupal site but we can't do it because we get this dependency: "Required by: Drupal (Field type(s) in use - see Field list)". Even if you delete the fields provided by the module via the UI, or programmatically by executing the field_delete_field() function, you will get a new dependency: "Required by: Drupal (Fields pending deletion)".
These dependencies are created by Drupal core to prevent a module from being uninstalled until all the data related to its fields has been removed from the database, in order to maintain consistency.
This has several drawbacks, the first one being that you can't uninstall your module when you want: you have to wait until all the field data values are removed from the database (the rather strangely named field_deleted_data_XX and field_deleted_revision_XX tables) and the meta-information stored in the field_config and field_config_instance tables is removed. And most importantly, nobody actually knows when this is going to happen! These database rows are removed in batches on each cron task execution, so depending on our cron regularity and the amount of data stored in our field tables, this task can last from minutes to weeks.
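You can check whether a site still has fields pending purge by querying the field metadata directly. In Drupal 7 core, deleted-but-not-yet-purged fields are flagged with deleted = 1 in the field_config table, so this read-only check is safe to run at any time:

```sql
-- Fields that have been deleted but whose data is still being
-- purged batch-by-batch on each cron run.
SELECT id, field_name, type
FROM field_config
WHERE deleted = 1;
```

An empty result set means the "Fields pending deletion" dependency is gone and the module can be uninstalled normally.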
This is a problem because, naturally, we want to uninstall our module now and not be forced to check periodically our production database to see if we are allowed to uninstall the module once all that information has been removed from the database.
To avoid such situations and regain control, you can perform all these tasks in a hook_update_N() function, forcing the deletion of all the information and finally uninstalling the module. You can check the code in the gist below:
The job is divided into three parts: the data definition, the field data purge, and the module list cleanup.
In the data definition task we provide all the data required to perform the job: the name of the field to delete. Given that information, we get the field_info array and the name of the module to be uninstalled. Finally, field_delete_field() is executed.
After that, the field data is purged in the batch body. Since we don't know how much data we will have to purge, we remove just 100 database rows per batch execution. After each purge we check whether all the data has been removed, to decide if we have to remove more data from the database or continue to the final part.
Once all the data and metadata related to the module are removed from the database, the Drupal field types dependency is gone and we are able to disable and uninstall our module cleanly. Finally, we can drop the empty field_deleted_data_XX and field_deleted_revision_XX tables to keep our database clean.
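The three parts described above can be sketched as a multi-pass hook_update_N() implementation, roughly like the following. This is a hypothetical sketch using Drupal 7 core APIs; the module and field names are placeholders, not the actual code from the gist:

```php
<?php

/**
 * Force-purge a field's data and uninstall the module that provides it.
 *
 * Hypothetical sketch: "mymodule", "myfieldmodule" and
 * "field_product_price" are placeholder names.
 */
function mymodule_update_7100(&$sandbox) {
  $field_name = 'field_product_price';

  if (!isset($sandbox['started'])) {
    // Part 1: data definition - mark the field for deletion.
    field_delete_field($field_name);
    $sandbox['started'] = TRUE;
  }

  // Part 2: purge deleted field data, 100 rows per pass.
  field_purge_batch(100);

  // The deleted field stays in field_config until fully purged.
  $pending = field_read_fields(
    array('field_name' => $field_name),
    array('include_deleted' => TRUE)
  );
  $sandbox['#finished'] = empty($pending) ? 1 : 0;

  if (empty($pending)) {
    // Part 3: clean the module list now that the dependency is gone.
    module_disable(array('myfieldmodule'));
    drupal_uninstall_modules(array('myfieldmodule'));
  }
}
```

Returning a value below 1 in $sandbox['#finished'] makes Drupal call the update function again, which gives us the batch behavior described above without waiting on cron.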
Using this approach, we have two key benefits: a. we are sure that the module is disabled and our database is clean, and b. we are confident that we can remove the module from our repository, given that in the next deploy we won't get any dependency conflict with that module.
Tags: Drupal Planet

Visitors Voice: That is why we sponsor the Search API Solr module
Gizra.com: Gizra - We've Got Your Headless Covered
The difficulties in creating a semi or fully decoupled site aren't in the RESTful part. Spitting out JSON is now covered by several modules, including RESTful, which aims for a "best practices" solution.
One of the real problems, though, is how to prevent us, the community, from re-inventing the wheel over and over again. Basically, how do we package our frontend code similarly to how we package our generic backend code - AKA "modules". I discussed these problems, and offered some solutions, in my "BoF" presentation:
Doug Vann: Drupal Training at Drupal Camps And Why We Need More Of It
Drupal Camp Road Warrior
By the end of 2014, I will have hit 50 Drupal Camps! It took 72 months to hit 22 cities, in 16 states! In that time, I've seen Drupal Camps run in almost every conceivable way possible. From Madison, WI to Orlando, FL, from New York, NY to San Diego, CA, I've seen thousands of attendees flocking to these events, all with the hopes of growing in their knowledge and understanding of Drupal. In my experience, the system works -- mostly.
But, we can do better.
We all know the drill
You assemble a bunch of speakers. They will deliver a bunch of sessions. You try to group these sessions into tracks, if you can. You wrestle with how to add a few sessions about the Drupal Community or maybe about Business or a few odd sessions that don't fit into your tracks. Oh yah... You almost forgot about the beginners, so you have a session or two that demystifies one topic or another.
The N00B experience
You would be surprised at how many people show up to a Drupal Camp who don't know what a node is. Or if they do know what a node is, they don't know how to create their own content types. Or if they do know how to create content types, they don't know how to create Views. These people show up and attend sessions that they have little chance of comprehending. They sit down for up to an hour per session listening to senior developers from major Drupal shops talk about nodes and fields and blocks and views-displays and modules. The whole time they may be thinking, "Dang! I thought by showing up for a day or two I would start picking this stuff up!?" But they're not.
Meet the N00Bs
Who are these people who are "New To Drupal"? Well, for starters, they're probably not really that new to Drupal! Based on my experiences, here is an incomplete list of people who regularly attend my classes.
- Certainly anyone who just discovered Drupal very recently and has come to the camp to gain a better understanding of Drupal. [This is not always the biggest portion]
- Individuals who have been to a couple camps and have tried to read the books or watch the videos but still haven't had the needed "AHA!" moments to grasp it all.
- Individuals who work for a University or Government or Company who uses, or is considering, Drupal. [This is a BIG ONE]
- People, often with other web skills [sys admins, java, asp, php, etc] who are sent by their employers to scope out Drupal and/or to learn how to use it.
- People coming to gain skills in an effort to alleviate their, or their employer's, dependency on vendors. [This happens a lot!]
- New hires to Drupal shops or Design shops or shops offering web related services who are looking to better provide Drupal related services.
- People who know plenty, but want to make sure they are properly grounded.
- People who come in the hopes of asking lots of questions!
I've seen all that and more. Multitudes of people are coming to camps in hopes of really wrapping their minds around how Drupal solves the modern problem of publishing dynamic content on the web. Too often, without a day of training they leave the camp with the same [and more] questions than they arrived with.
What they really want/need
After attending camp after camp, it's a proven fact. People are coming to learn what Drupal is and how to use it. If the camp has no full day training opportunity then many are going to drown in the other sessions and simply not get what they really need.
I'll just be frank at this point. I believe that every camp needs to have a full day of beginner training. I believe that this training should be delivered not across differing tracks with differing speakers, but by the same individual, or group of individuals, working together to provide the full day of training. I have done this time and time again and I see the relief on people's faces as they gain a practical understanding of the power and flexibility of Drupal and how they can leverage it. This day of training starts them down the road of really learning Drupal. If there's a 2nd day of camp, I can PROMISE you that they will get far more out of it after a day of training.
How to provide a day of training at a Drupal Camp
There are many ways! Here's a list that is, by no means, exhaustive.
- Some camps have a dedicated day just for trainings on the day before the regular camp.
- This is effective not only for beginner classes but for classes on SEO, Drupal 8, Module Development, etc.
- Most often training takes place in the same location as the camp, but occasionally it is not.
- Some camps simply reserve one track and dedicate it to a full day of training.
- I've done this quite a few times where I have a room all day while others hop from session to session.
- This is easier if you can't dedicate a whole day to training.
- The content in the full day Drupal beginner's training.
- In some camps someone leads the class through the Acquia curriculum of Drupal In A Day
- Some camps have a vendor come in and do the training
- Doug Vann! If you want me to join your camp and present a day of training call me at 765-5-DRUPAL or CONTACT ME
- I've seen posts from BLINK REACTION & OSTRAINING about their various full day offerings at Drupal Camps as well.
- If I missed anyone who has travelled to multiple camps and provided full day trainings in the past and would do so again, leave a comment and I'll add you here. :-)
- Some camps have used the BuildAModule.com Mentored Training method.
- I've done a number of these as well and they're pretty amazing!
- More info at http://buildamodule.com/train
- The finances of a full day of training. Here's how I've experienced this as a trainer.
- Some camps offer it for free or as part of the Camp fee that attendees have already paid.
- Some camps charge attendees enough to cover the cost of catering.
- Some camps charge a flat fee per attendee and share a percentage with the trainer.
- Some camps procure a "training sponsor" and hand that sum off to the trainer.
Conclusion
Every Drupal Camp can do this! I've been invited to one-day camps and they give me one of their rooms for the whole day. I show up and deliver the full day of Drupal Beginner Training. Sadly, I never get to see any of the other sessions. Oh well... After 50 Drupal Camps, I've seen plenty of Drupal Sessions! :-)
Providing a full day of training will definitely raise your attendance. Universities, Governments, and Companies will send people. People will ask their employers to send them. Sponsors will really appreciate the fact that you're providing extra value to a broader audience.
Seriously folks... What more can I say?
Full Day Trainings at Drupal Camps is a Big Win for everyone involved!
Drupal Planet
Forum One: DrupalCon Amsterdam: Done and Deployed
DrupalCon Amsterdam 2014…what a week! Drupal 8 Beta released, core contributions made, and successful sessions presented!
Drupal 8 Beta — has a nice ring to it, don’t you think?! But what exactly does that mean? According to the drupal.org release announcement, “Betas are good testing targets for developers and site builders who are comfortable reporting (and where possible, fixing) their own bugs, and who are prepared to rebuild their test sites from scratch if necessary. Beta releases are not recommended for non-technical users, nor for production websites.” Or more simply put, we’re over the hump, but we’re not there yet. But you can help!
Contrib to Core

One of the biggest focal points of this DrupalCon was contributing to Drupal 8 core in the largest code sprints of the year. Specially trained mentors helped new contributors set up their development environments, find tasks, and work on issues. This model is actually repeated at Drupal events all over the world, all year long. So even if you missed the Con, code sprints are happening all the time and the community truly welcomes all coders, novice or expert.
Forum One is proud that our own Kalpana Goel was featured as a mentor at DrupalCon Amsterdam. She is very passionate about helping new people contribute.
It was my third time mentoring at DrupalCon and like every time, it not only gave me an opportunity to share my knowledge, but also learn from others. Tobias Stockler took time to explain to me the Drupal 8 plugin system and walk me through an example. And fgm explained Traits to me and worked on a related issue.
-Kalpana Goel
Forum One Steps Up

While the sprints raged on, other Forum One team members led training sessions for people currently developing with Drupal. I, Campbell, presented Panels, Display Suite, and Context – oh my! to a capacity crowd (200+), and together, we presented Coder vs. Themer: Ultimate Grudge Smackdown Fight to the Death to over three hundred coders and themers. Now that Drupal 8 Beta is released we’re already looking forward to creating a Drupal 8 version of Coder vs. Themer for both Los Angeles and Barcelona!
This year’s European DrupalCon was a huge success, and a lot of fun! As a group, our Forum One team got to take a leading role in teaching, mentoring, and sharing with the rest of the Drupal community. It’s easy to pay lip service to open source values, but we really love the opportunity to show how important this community is to us. We recently estimated that we contribute almost a hundred patches to Drupal contrib projects in a good month. We’re pretty proud of that participation, but it’s only at the conventions that we get to engage with other Drupalists face to face. DrupalCon isn’t just for the code, or the sessions. It’s for seeing and having fun with our friends and colleagues, too.
At Amsterdam, we got to participate in code sprints, lead sessions and BOFs (birds of a feather sessions), and join the community in lots of fun extracurricular activities. We’re already making plans for DrupalCon LA in the spring. We’ll see you there!
Drupal Watchdog: Drupal in the Age of Surveillance
On Feb. 11, 2014, Drupal.org – flagship site of the Drupal project – joined thousands of other websites in a campaign against state Internet surveillance dubbed “The Day We Fight Back.”
In announcing Drupal.org participation in the campaign, leading Drupal developer Larry Garfield made a strong link between free software and digital freedom: “Both the American and British governments have been found violating the digital privacy of millions of people in their own countries and around the world. That is exactly the sort of attack on individual digital sovereignty that Free Software was created to combat.”
What are the implications of recent surveillance revelations for Drupal site owners? What can and should Drupal site builders and developers be doing to protect user privacy? To find out, I spoke with analysts and developers both within and outside the Drupal community.
User Data and Threat Modeling

“Contemporary websites have almost innumerable places where information can be entered, logged, and accessed, by either the first party or third parties.”
That’s the frank assessment of Chris Parsons, a postdoctoral fellow at The Citizen Lab at the University of Toronto’s Munk School of Global Affairs. Parsons’ current research focus is on state access to telecommunications data, through both overt mechanisms and signals intelligence – covert surveillance.
Parsons recommends an approach to user data protection called threat modeling. “So who are you concerned about, what do you believe your ethical duties of care are, and then how do you both defend against your perceived attackers and apply your duty of care?”
Parsons suggests, “The first step is really just information inventory: what’s collected, why, where’s it going, for how long.”
Lullabot: Drupal.org Initiatives
In this episode, Joshua Mitchell, CTO at the Drupal Association, talks with Amber Matz about the exciting initiatives in the works for drupal.org and associated sites. We also talk about how the community, including the D.A. Board, working groups, and volunteers, is utilized to determine priorities and work on infrastructure improvements. There are exciting changes in the works on drupal.org regarding automated testing, git, deployment, the issue queue, localize.drupal.org, and groups.drupal.org.
- Pacific NW Drupal Summit 2014
- Drupalhagen
- Lullabot is hiring
- Drupalize.Me is hiring a Developer+Trainer
- Highly Critical Security Advisory: SA-CORE-2014-005 - Drupal core - SQL injection
- Tips for Applying Today's Drupal Core Security Update (SA-CORE-2014-005)
- Contact Josh Mitchell
- Dries' Keynote at DrupalCon Amsterdam
Blink Reaction: Drupal As A Public Good and Renewing our Commitment
I was going to write a blog about Drupalcon Amsterdam and our commitment to Drupal and then I realized the best way to say it was to show it.
Thursday, October 16, 2014
Memo to all staff:
I am pleased to announce that starting this quarter Blink will significantly increase our efforts in support of Drupal.
NEWMEDIA: Drupal SA-CORE-2014-005
Here at NEWMEDIA! we are constantly learning and improving. Over the course of the past year we have been refining our continuous integration and hosting platforms as they relate to Drupal. A significant threat, and a subsequent fix, has been identified in all versions of Drupal 7 that has rocked the Drupal community. The good news is that your site is already patched if you are hosting a Drupal 7 site with us. The great news is that we have an opportunity to highlight some of the improvements we have made to our hosting offering.
The new system provides a smoother flow between development efforts and your ability to see the changes. When a developer's code is accepted to your project, it is immediately made visible to you in a password-protected staging environment. When the change is approved, it can immediately be made available on the production site. Our systems ensure that the servers developed on are identical to the servers in the staging and production environments. This consistency increases the return on your investment by decreasing the amount of time it takes for a developer to perform their tasks. At the same time, it guarantees a smoother deployment pipeline.
We are systematically moving all of our hosting properties into this new system.
* Your sites will now be hosted in what is known as Amazon's Virtual Private Cloud. This is the next generation of Amazon's cloud offering that provides advanced network control and separation for increased performance and security.
* Your sites will move from a static IP address to state-of-the-art load balancing techniques. The load balancing and proxy layers provide significant protection against DDoS and other types of attacks that might be used against a website.
* Your DNS management will simplify. The same technology we are using at the load balancing layer allows for a more dynamic system. Because we are moving from addressing the machines by numbers to addressing them by name, we gain additional flexibility. For example, let's say your site is under a higher than average load. We could temporarily add additional webservers to increase the performance of your site.
* Site performance will improve. You are being moved to a distributed system that is more capable of handling your site's needs.
The goal of this is to increase the quality of our services and offerings while continuing the tradition of giving back. It is unfortunate that a security issue of this magnitude has affected Drupal. It is good to see the community come together to help bring the current set of continuous integration and deployment practices to the next level. Come find us at the http://2013.badcamp.net/events/drupal-devops-summit to see how we do continuous integration and deployment.
Help us figure out the best way to share!
ERPAL: IMPORTANT! Safety first - The Drupal 7.32 Update
Yesterday, when the Drupal 7.31 SQL injection vulnerability came up, I think this was one of the most critical updates I ever saw in the Drupal world. First of all, thanks a lot to everybody who helped find and fix this issue. With the discovery of this security issue and the fix, Drupal security and the community behind it have shown once more how important this combination is. All Drupal sites should and MUST be updated to version 7.32 to keep their applications secure. A new ERPAL release 2.1 is already available, and it is very important that you use this update for your ERPAL installation.
Why this hurry?

As I already mentioned above, this update is critical to all sites, as the vulnerability can be exploited by anonymous users. It is possible to get admin access (user 1) with the correct attack sequence. Some of you may ask if Drupal is still secure at all. The answer is still YES! It is one of the most secure CMF / CMS out there, and with a dedicated security team on Drupal.org many security issues are discovered. Security issues are worst if they are discovered not by the admin, support, or security team but only by hackers. And it becomes even worse if people don't update their sites.
So what to do?

Don't panic! You just need to update your site to the latest Drupal 7.32 version. If you are using a distribution, which may have patches included in its installation profile to support all features, check for updates on its project page and get your update there. Easy – that's it.
How to avoid future problems

Please follow the Drupal security advisories and keep your site's modules up to date. That's one of the most important rules for Drupal users.
For us, creating business applications with Drupal means taking responsibility for all our users: keeping their data safe and their ERPAL systems running. With this blog post I want to ask every Drupal developer, maintainer, client, or site builder to update their site immediately.
Amazee Labs: Faster import & display with Data, Feeds, Views & Panels
Handling loads of data with nodes and fields in Drupal can be a painful experience: every field is put into a separate table which makes inserts and queries slow. In case you just want to import & display unstructured data without the flexibility and sugar of fields, this walkthrough is for you!
On a recent customer project, we were tasked with importing prices and other information related to products. While we are fine with handling 10k+ products in the database, we didn't want to create field tables for the price information to be attached to products. For every product, we have 10 maybe even more prices which would result in 100k+ prices at least.
The prices shouldn't be involved in anything related to the product search, they should just appear as part of the product view itself. Also there is no commerce system involved at the current state of the project.
Putting the prices into a separate field on the product node may sound like a good idea in the first place. But remember, when loading a list of those products, all the prices will have to get loaded as well. We wanted those prices to be decoupled from the products, stored in a lightweight way, and only loaded when necessary - on the single product view.
1) Light-weight data structures in Drupal using the data module

First, I thought implementing a custom entity or just a data table would be the way to go. But then we considered giving the data module a try. The data module allows site builders to work at a much lower level than with Drupal fields: you can create database tables, specify their columns, and define relationships. What really makes it appealing is that you can access the structured data using Views, expose the custom data tables as custom entity types, and use the Feeds module for importing that data, without any coding required.
After installing the data module, you can manage your data tables under Structure > Data tables
We create a data table for the product prices and specify the schema with all the columns that should be included. Just like fields but without any fancy formatters on top of it:
This will create the desired database table for you.
Having defined the data, we can use the Entity data module that comes with Data to expose the data table as a custom entity type. By doing so, you will get integrations like for example with Search API for free.
2) Import using Feeds and the generic entity processor
Luckily, the [Meta] Generic entity processor issue for the Feeds module has been committed after 3 years of work. As there hasn't been a release since the time of committing the patch (January 2014), this is only available from later dev versions of the Feeds module.
But it's worth the hassle! We can now select from a multitude of different Feeds processors based on all the different entity types in the system. After clearing caches, the data tables that we have previously exposed as entity types now show up:
The Feeds configuration is performed as usual. In the following, we map all the fields from the client's CSV file to the previously defined columns of the data table:
We are now able to import large chunks of data without pushing them through the powerful but slow Field API. A test import of ~30k items was performed within seconds. A nodes & Fields based import usually creates 200 items per minute.
3) Data is good, display is better

In the next step, we create a View based on the custom data table to display prices for products. We specify a number of contextual filters so that users will see prices for a) the current product, restricted to b) the user's price source and c) currency.
Notice, that the Views display is a (Ctools / Views) Content pane, which has some advanced pane settings in the mid section of the views configuration.
Most importantly, we want to specify the argument input: Usually we would use Context to map the views contextual filters to Ctools contexts that we provide through Panels.
Somehow, in this case, a specific field didn't work with the context system, which automatically checks whether all necessary contexts are available and only allows you to use the Views pane under those circumstances. As you can see in the screenshot above, I have set all arguments to "Input on pane config" as a workaround.
These pane config inputs then show up when we configure the Views pane in Panels. In this case, we have added the Product prices view as a pane on the panelized full node display of the Product node type (Drupal jargon ftw!).
Each pane config is populated with the appropriate keyword substitutions based on available contexts node and user of the panelized node.
4) The end result
Finally, this is the site-built result: a product node including a prices table:
This concludes my how-to on using the Data, Feeds, Views and Panels modules to attach large data sets to nodes without putting them into fields. Once you know how the pieces fit together, it will take you less time than it took me to write this blog post to import and display large amounts of data in a less flexible, but more performant way!
Gábor Hojtsy: On authority in Drupal and/or Open Source in general
I just had the time to watch Larry Garfield's DrupalCon Amsterdam core conversation on managing complexity today. I did not have the chance to attend his session live due to other obligations, but it is nonetheless a topic I am very interested in.
Code Karate: Drupal 7 Absolute Messages
In episode 174, we look at a new way to display administrative messages. Absolute Messages is a module that changes how status, error and warning messages are displayed. For the most part this is a modest improvement, but it does allow for hiding and showing of messages.
Tags: Drupal, Messaging, Drupal 7, Drupal Planet, Site Administration, UI/Design
Triquanta Web Solutions: Automatically switch Drush versions per project
Now that Drush has become standard equipment in every developer's toolbox, and Drupal 8 is around the corner, you may find yourself asking "Which Drush version should I use?" While Drush 6 has a stable release, only Drush 7 can be used with Drupal 8. Usually, I use Drush 7. It works well with both Drupal 7 and Drupal 8, and even though it doesn't have a stable release yet, it feels pretty stable to me.
Combining Drush versions: the trouble begins
Unfortunately, when you use Drush 7 to run commands on a remote server which runs Drush 6, you will run into errors. For instance when doing a sql-sync:
$ drush sql-sync @mysite-prod @self
You will destroy data in mysite and replace with data from example.com/mysite.
Do you really want to continue? (y/n): y
Starting to dump database on Source.                                 [ok]
Database dump saved to                                               [success]
/home/www-data/drush-backups/mysite/20141016113131/mysite_20141016_113132.sql.gz
The Drush sql-dump command did not report the path to the dump file
produced. Try upgrading the version of Drush you are using on the    [error]
source machine.

Obviously Drush 7 doesn't like to talk to Drush 6. So how do we solve that?
Installing multiple Drush versions side-by-side
It's not too hard to install two Drush versions side-by-side and use aliases or symlinks to choose a version. On my system I installed Drush 7 using Composer, and I installed Drush 6 using the manual method.
Next, I created two symlinks called "drush6" and "drush7" in a directory in my $PATH. I use ~/bin, but it depends on your OS and configuration.
$ cd ~/bin
$ ln -s ~/drush-6.4.0/drush drush6
$ ln -s ~/.composer/vendor/drush/drush/drush drush7

Using those symlinks, I can use both versions anywhere on my system:
$ drush6 --version
Drush Version : 6.4.0
$ drush7 --version
Drush Version : 7.0-dev

Now I can run drush6 sql-sync @mysite-prod @self to choose Drush 6 and avoid problems syncing with a remote server.
Automating which version to use
It's nice to be able to choose, but wouldn't it be awesome if you could just run drush ... without having to think about which version you need? If you're managing multiple sites on different servers, you don't want to spend your energy remembering which project requires which Drush version.
At Triquanta we use git repositories, one for each project. I want to be able to specify the default Drush version per project, so I will never run the wrong Drush version by mistake. That's where this really simple bash script comes in:
#!/bin/bash
version=$(git config --get drush.version)
if [ "$version" = '6' ]; then
  drush6 "$@"
else
  drush7 "$@"
fi

Save it as "drush" in a directory in your $PATH, and make it executable. Now when you execute drush, it will call this script, which by default runs Drush 7.
$ drush --version
Drush Version : 7.0-dev

When a project requires Drush 6 instead, I set a variable "drush.version" in the git working copy:
$ git config drush.version 6
$ drush --version
Drush Version : 6.4.0

That's all there is to it. Regardless of where you are within your git-managed directory structure (the site root, /sites/default/files/, etc.), the script will always know which Drush version to use.
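If you want to convince yourself the dispatch logic behaves before wiring it into your $PATH, you can dry-run it in a throwaway git repository with stubbed drush6/drush7 commands. Everything below is illustrative: the two shell functions just echo version strings in place of real Drush binaries.

```shell
# Dry-run of the wrapper's dispatch logic with stubbed Drush "binaries".
tmp=$(mktemp -d) && cd "$tmp"
git init -q .

drush6() { echo "Drush Version : 6.4.0"; }
drush7() { echo "Drush Version : 7.0-dev"; }

# Same logic as the wrapper script, wrapped in a function for testing.
run_drush() {
  version=$(git config --get drush.version)
  if [ "$version" = '6' ]; then
    drush6 "$@"
  else
    drush7 "$@"
  fi
}

run_drush --version        # no drush.version set: falls through to Drush 7
git config drush.version 6
run_drush --version        # config present: dispatches to Drush 6
```

Because `git config --get` simply returns nothing when the key is unset, the wrapper safely defaults to Drush 7 in repositories that never configured a version.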
Modules Unraveled: 122 The Drupal Security Team With Greg Knaddison and Michael Hess - Modules Unraveled Podcast
- What type of people are on the Drupal Security Team?
- https://security.drupal.org/team-members
- Mostly coders, some project managers, core maintainers
- What does the security team do?
- We fix issues in Drupal
- Resolve reported security issues in a Security Advisory
- Provide assistance for contributed module maintainers in resolving security issues
- Provide documentation on how to write secure code
- Provide documentation on securing your site
- Help the infrastructure team to keep the drupal.org infrastructure secure
- What doesn’t the security team do?
- projects without stable releases
- Site support
- Set policy around security with the security working group.
- Is there a D7 security team and a D8 security team with different people? (What about Drupal 6)
- How can others get involved?
- What was the recent bug that was fixed?
- Paulius Pazdrazdys
How is this latest security release different from others? Do you have any information on whether this bug did any harm before the release? - aboros
The recent bug was über critical, still only 20/25. What would be a 25/25 bug? - aboros
Do you notify any high value targets before SA is sent out? Is the list of those public? Can one be part of this privileged group? - Carie Fisher
When was the latest bug found? Is there a private Drupal security group where this was discussed? Could we have found out sooner? - David Hernandez
What is the average time from discovery to announcement? - Damien McKenna
@ModsUnraveled Are there existing stats on how long it takes from initial reporting, to maintainer response, to first patch & fix? - Heine Deelstra
How was SA-CORE-005 (in hindsight) able to be public for so long in the public queue? - Mark Conroy
I think the #drupal security team are great. Working extremely hard. (I know, that wasn't a question) - aboros
Are there plans for some sort of bounty program run by DA maybe? - David Hernandez
What kind of work does the security team do besides review code? What is the administrative overhead?
Get Pantheon Blog: What We Are Seeing With Drupal SA 2014-005
It's been 24 hours since Drupal SA-CORE-2014-005 was announced, and we are already beginning to see attacks in the wild. As a platform with tens of thousands of Drupal sites, we have a unique perspective on the problem.
This is not a drill: black-hat scripters from sketchy domains are working through lists of known Drupal websites probing for exploits. If you have not patched all your sites, stop reading and do it right now.
...
Ok, now that your websites are safe, here's what we're seeing.
Profiling and Logging Suspected Exploits
We learned of the vulnerability through our participation with the Drupal Security team, so we had a few days to prepare prior to the announcement. At that point, we were under obligation not to share details as part of responsible disclosure, but we did tweet and email customers to "be ready" for the update on Wednesday.
Beyond that, the first step was fashioning our own exploit to have something to build a defense against. I "owned" my personal blog several times getting this right.
With a sense of a potential attack signature, we developed platform-wide request filtering, WAF style. At our scale, we couldn't try to tweak every individual site: a platform solution was the only answer.
We got that deployed on Monday, giving us two days to see the results of real production traffic. We were able to eliminate false-positives while still detecting our PoC attacks, which gave us confidence that our filter would not impact legitimate traffic. That was an important moment, because it meant we could start locking things down.
Log and Block
With the SA announcement on Wednesday, we switched the filter from "log" to "log and block". The first detected (and blocked) attack came in at 22:42 UTC (3:42 PM PT), about seven hours after the security announcement. It attempted to set up a fake user with id 9999 and a suspicious temp email address from trbvm.com.
Over the rest of the day we saw a handful (20-ish) more attacks that looked like proofs of concept or penetration tests. We saw attempts to re-use a proof of concept posted in a Reddit thread, an attempt to create a user named "morpheus" with a pre-set password, and a few attempts to make accounts with the email address test@test.com and then elevate them to an admin role.
It Gets Real
Early this morning at 08:23 UTC (1:23 AM PT) we started seeing an attack that attempts to insert a new item into the menu_router table. This attack is originating from IPs at a VPS provider in the .ru domain space, and it appears to be working through a list of domain names alphabetically.
The attack seems to be the initial part of a multi-step process. The menu_callback it is attempting to create will try to use file_put_contents() to drop a file somewhere in the codebase. That file will pick up a subsequent http request with more of an attack payload in the $_COOKIE superglobal. This sophistication plus the alphabetical attack sequence suggests a professional exploit.
Note that this attack has a 0% chance of success on Pantheon. We block it, but even if we didn't, live sites can't write files into the codebase, and a sophisticated $_COOKIE attack would also be stripped. Still, it's concerning.
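If you run sites elsewhere and want a rough check for this class of attack, one approach is to look for raw PHP function names in an export of the menu_router table: legitimate callbacks are named module functions, not primitives like file_put_contents. A minimal sketch over a text export follows; the dump file name and its contents are fabricated here to show what a hit would look like.

```shell
# Sketch: flag suspicious callbacks in a text export of menu_router.
# The rows below are invented; the third one mimics an injected entry.
cat > menu_router_dump.txt <<'EOF'
node/%	node_page_view	node_access
user/login	user_login	user_is_anonymous
dropzone	file_put_contents	user_access
EOF

# Raw PHP primitives appearing as callbacks are a red flag worth investigating.
grep -nE 'file_put_contents|assert|eval' menu_router_dump.txt
```

A grep hit is not proof of compromise on its own, but given how this attack plants its payload, any such row deserves immediate scrutiny.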
This Is Not A Drill
It's barely 24 hours after the SA, and we have logged and blocked over 500 attempted attacks on sites on the Pantheon platform. We expect this rate to increase as exploit code is more widely shared and attacks become more automated.
The fact that we are blocking suspect traffic does not mean you should delay updating. We're happy to be defending sites on our platform, but the filter, like CloudFlare's WAF firewall rule, is not a guarantee to secure your site. You need to get the update deployed and patch the vulnerability at the source.
If you need help, let us know. If you have friends who need help, lend a hand.
Credits
Credit to the Drupal Security team for organizing a responsible and orderly release. There was likely temptation to rush something out once the severity was realized, but they showed great professionalism by taking a more deliberate route. As soon as the fix was disclosed, black hats would start working to weaponize the exploit, which we are already seeing.
I'd also like to thank Leonardo Finetti for chiming in based on some tweets with additional information about the menu_router attack. He has his own post up (in Italian) here.
Finally, I'd like to give credit to Greg "greggles" Knaddison for planting the idea in my head of using the reach of our platform as a way to monitor exploit attempts against sites running on Pantheon. Hopefully the data we're able to gather will help everyone defend better and build more secure software and platforms.
Blog Categories: Engineering
Acquia: Shields Up!
Yesterday, the Drupal Security team announced that all Drupal 7 sites are highly vulnerable to attack. Acquia deployed a platform-wide "shield" which protects all our customer sites, while still keeping them 100% functional for visitors and content editors. These sites can now upgrade to 7.32 in a more calm, controlled timeline.
Acquia: 30 Awesome Drupal 8 API Functions you Should Already Know - Fredric Mitchell
Apart from presenting a terrific session that will help you wrap your head around developing for Drupal 8, Fredric and I had a great conversation that covered the use of Drupal and open source in government, government decision-making versus corporate decision-making, designing Drupal 7 sites with Drupal 8 in mind, designing sites for the end users and where the maximum business value comes from in your organization, and more!