Channel: Ask Puppet: Puppet DevOps Q&A Community - RSS feed

Dynamically add Files to Hiera

Hi, we need to manage many keys in Hiera. Since many people should be able to edit the keys, and in order to avoid a complete mess, I was thinking of working with many different files. The problem is that I don't know how to make Hiera read from new files; I don't want to add each file to the hierarchy explicitly. Ideally I would add something like `/etc/puppetlabs/code/environments/%{::environment}/hieradata/delegated/*` and Hiera would just read all files under the `delegated` folder. I wasn't able to find out how to achieve this. What is the correct approach here? Thanks
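One possibility, a hedged sketch that assumes Hiera 5 (Puppet 4.9 or later) is available, is a hierarchy level that uses a glob relative to the datadir, so every YAML file under `delegated/` is read without listing each one. For priority lookups the first matching file in sort order wins; use a merge lookup to combine keys across files.

```yaml
# hiera.yaml (environment layer), version 5 -- sketch only
---
version: 5
defaults:
  datadir: hieradata
  data_hash: yaml_data
hierarchy:
  - name: "Delegated key files"
    glob: "delegated/*.yaml"
  - name: "Common"
    path: "common.yaml"
```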

How do I use hiera's calling_class_path pseudo variable from the command line?

Whenever I do any Hiera troubleshooting, I do lookups via Hiera's CLI utility, e.g.:

```shell
$ hiera {key} ::environment=dev ::role=webserver --config /etc/puppetlabs/code/hiera.yaml
```

However, in hiera.yaml I am also making use of the "calling_class_path" pseudo-variable: https://docs.puppet.com/hiera/3.1/puppet.html#special-pseudo-variables

Therefore my hiera.yaml file looks like this:

```shell
$ cat /etc/puppetlabs/code/hiera.yaml
---
:backends:
  - yaml
  - eyaml
:hierarchy:
  - "%{::role}"
  - "%{calling_class_path}"
  - common
:yaml:
  :datadir: "/etc/puppetlabs/code/environments/%{::environment}/hieradata"
```

In this scenario, does anyone know how to write my hiera command to feed in a value for this pseudo-variable?
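A hedged suggestion: the hiera CLI treats any extra `key=value` argument as a top-scope variable for interpolation, and since the hierarchy references the pseudo-variable without a `::` prefix, passing it under that exact name should satisfy the `%{calling_class_path}` level. The value `profiles/base` below is only a placeholder.

```shell
hiera {key} ::environment=dev ::role=webserver \
  calling_class_path=profiles/base \
  --config /etc/puppetlabs/code/hiera.yaml
```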

Having some issues/confusion regarding moving class definitions into hiera

Folks, I am having some confusion regarding 'moving' class definitions into Hiera. I am using the razorsedge/snmp module, and I am able to use it successfully by specifying the following as class definitions:

```puppet
#####
# SNMP Configuration
class profiles::os::linux::snmp {

  snmp::snmpv3_user { 'MYUSER':
    authpass => 'SomePassword',
    authtype => 'MD5',
  }

  class { 'snmp':
    snmpd_config => [ 'rouser MYUSER auth' ],
  }
}
```

It is my understanding that I can move this into hieradata, which would be ideal. However, it seems that the only portion of the above code that I can successfully implement in Hiera is:

```yaml
snmp::snmpd_config:
  - 'rouser MYUSER auth'
```

The /var/snmpd/snmpd.config file contains the new line after a puppet agent run. But I cannot seem to implement the rest of it. If I do this:

```yaml
snmp::snmpv3_user:
  'MYUSER':
    authpass: 'SomePassword'
    authtype: 'MD5'
```

it doesn't seem to work; I never get a new user named "MYUSER". So here are my questions:

- Am I doing this correctly?
- Is it a valid assumption that module definitions can always be defined in Hiera instead?

Thanks, Todd
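A hedged sketch of one way to bridge the gap, rather than razorsedge/snmp's documented interface: `snmp` is a class, so its parameters (like `snmpd_config`) are bound from Hiera automatically, but `snmp::snmpv3_user` is a defined type, and Hiera never instantiates defined types on its own. A thin wrapper parameter (the name `snmpv3_users` below is my own invention) can turn a Hiera hash into resources:

```puppet
class profiles::os::linux::snmp (
  # hypothetical parameter, filled by automatic parameter lookup from Hiera
  $snmpv3_users = {},
) {
  # snmp::snmpd_config continues to come from Hiera
  include ::snmp

  # one snmp::snmpv3_user resource per entry in the hash
  create_resources('snmp::snmpv3_user', $snmpv3_users)
}
```

With matching data:

```yaml
profiles::os::linux::snmp::snmpv3_users:
  'MYUSER':
    authpass: 'SomePassword'
    authtype: 'MD5'
snmp::snmpd_config:
  - 'rouser MYUSER auth'
```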

Evaluation Error while using the Hiera hash

Hi, I have the following values in my Hiera YAML file:

```yaml
test::config_php::php_modules :
  -'soap'
  -'mcrypt'
  -'pdo'
  -'mbstring'
  -'php-process'
  -'pecl-memcache'
  -'devel'
  -'php-gd'
  -'pear'
  -'mysql'
  -'xml'
```

and the following is my test class:

```puppet
class test::config_php (
  $php_version,
  $php_modules = hiera_hash('php_modules', {}),
  $module_name,
) {
  class { 'php':
    version => $php_version,
  }

  $php_modules.each |String $php_module| {
    php::module { $php_module: }
  }
}
```

While running my Puppet manifests I get the following error:

```
Error: Evaluation Error: Error while evaluating a Function Call, create_resources(): second argument must be a hash at /tmp/vagrant-puppet/modules-f38a037289f9864906c44863800dbacf/ssh/manifests/init.pp:46:3 on node testdays-1a.vagrant.loc.vag
```

I am quite confused about what exactly I am doing wrong. My Puppet version is 3.6.2 and I also have *parser = future*. I would really appreciate any help here.
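A hedged observation based only on the data as pasted here: the value is a YAML array, not a hash, and block-sequence entries need a space after the dash, so a version like the one below would at least parse as a list (with `hiera_hash` swapped for a plain `hiera` lookup, since the data is an array). The create_resources error itself points at the separate ssh module's init.pp rather than this class.

```yaml
test::config_php::php_modules:
  - 'soap'
  - 'mcrypt'
  - 'pdo'
  - 'mbstring'
  - 'php-process'
  - 'pecl-memcache'
  - 'devel'
  - 'php-gd'
  - 'pear'
  - 'mysql'
  - 'xml'
```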

Overriding module parameter with hiera won't work on node level

I'm on Puppet 3.7.2 and Hiera 1.3.4, using a basic ENC (hiera-enc) to create this hierarchy:

```yaml
:hierarchy:
  - "nodes/%{clientcert}"
  - "roles/%{role}"
  - "environments/%{environment}"
  - default
```

I have no trouble passing classes and parameters at the role level. For example, one of my roles looks like this:

```
$ cat files/hiera/roles/agebdeb.yaml
---
classes:
  - ssh
ssh::ssh: false
...
```

But when I try to override a parameter at the node level, Puppet won't apply it even though it sees it:

```
$ cat files/hiera/nodes/age1.bdeb.qc.ca.yaml
---
classes:
  - ssh
ssh::ssh: true
...
```

Trying a compile to see whether the node's YAML file is read:

```
$ puppet master --debug --compile age1.bdeb.qc.ca | grep hiera
Debug: hiera(): Looking for data source nodes/age1.bdeb.qc.ca
Debug: hiera(): Found classes in nodes/age1.bdeb.qc.ca
Debug: hiera(): Looking for data source roles/agebdeb
Debug: hiera(): Found classes in roles/agebdeb
Debug: hiera(): Looking for data source environments/client
Debug: hiera(): Looking for data source default
Debug: hiera(): Looking up ssh::ssh in YAML backend
Debug: hiera(): Looking for data source nodes/age1.bdeb.qc.ca
Debug: hiera(): Found ssh::ssh in nodes/age1.bdeb.qc.ca
```

What am I doing wrong?
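A hedged debugging sketch (it assumes the ENC really does expose `role` and `environment` as the variables interpolated above, and the hiera.yaml path is a guess): feeding the same scope to the hiera CLI with debugging on shows which hierarchy level wins for the parameter itself, independent of what the compile does with it afterwards.

```shell
hiera -d ssh::ssh clientcert=age1.bdeb.qc.ca role=agebdeb environment=client \
  --config /etc/puppet/hiera.yaml
```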

Setting dnsmasq_config_file with Puppet managed OpenStack installation

Our OpenStack installation is rolled out with Puppet. We use the excellent `puppetlabs-openstack` module for that. Due to slow turnover cycles we are still stuck with version `5.0.2`; right now we cannot afford to migrate to a newer version, so this question relates to OpenStack 2014.2.2.

Our current network setup (GRE-tunneled) forces us to announce an MTU of 1454 via DHCP to the guest VMs on our compute nodes. We are well aware that we can do that by providing the relevant configuration in `/etc/neutron/dnsmasq-neutron.conf` and specifying this in `/etc/neutron/dhcp_agent.ini`. The problem is that we lack the proper Puppet knowledge to configure these parameters the "Puppet way". The current configuration looks like this:

(1) We use a file resource to create the `dnsmasq-neutron.conf` file in the appropriate location on our single network node. This works very well and I believe we can keep it that way.

```puppet
file { 'dnsmasq-neutron.conf':
  name    => '/etc/neutron/dnsmasq-neutron.conf',
  mode    => '0644',
  owner   => 'root',
  group   => 'neutron',
  content => template('/etc/puppet/manifests/neutron/dnsmasq-neutron.erb'),
}
```

(2) Currently we use the following really bad way of injecting the config line into the `dhcp_agent.ini` file:

```puppet
exec { 'dnsmasq_config-file':
  command => '/usr/bin/echo "dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf" >> /etc/neutron/dhcp_agent.ini && /usr/sbin/service neutron-dhcp-agent restart',
  user    => 'root',
}
```

Our first guess was to use Augeas, which did not work as intended. We do know that there must be a way to set the `dnsmasq_config_file` property in a clean, Puppet-managed way. There is in fact a parameter for `class neutron::agents::dhcp` called `dnsmasq_config_file`, which defaults to undefined. The question is: __How does one properly set this parameter?__

Our current node config for the network node is below.

```puppet
node 'network.lan' inherits basenode {

  class { '::openstack::role::network':
    #dnsmasq_config_file => '/etc/neutron/dnsmasq-neutron.conf'
  }

  file { 'dnsmasq-neutron.conf':
    name    => '/etc/neutron/dnsmasq-neutron.conf',
    mode    => '0644',
    owner   => 'root',
    group   => 'neutron',
    content => template('/etc/puppet/manifests/neutron/dnsmasq-neutron.erb'),
  }

  exec { 'dnsmasq_config-file':
    command => '/usr/bin/echo "dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf" >> /etc/neutron/dhcp_agent.ini && /usr/sbin/service neutron-dhcp-agent restart',
    user    => 'root',
  }
}
```

The solution above works, but the DHCP agent / dnsmasq restarts twice with every Puppet run. Some additional resources, such as our firewall settings, were stripped from the code above because they would only clutter the example.

Disclaimer: this is a [duplicate of a question already asked at ServerFault](http://serverfault.com/questions/774226/setting-dnsmasq-config-file-with-puppet-managed-openstack-installation). I encourage you to also answer there to get some extra reputation ;)
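A hedged sketch rather than a verified fix for puppetlabs-openstack 5.0.2: since `neutron::agents::dhcp` exposes `dnsmasq_config_file`, Hiera's automatic parameter lookup should be able to set it without editing the role class, provided nothing already binds that parameter in a resource-like declaration. Dropping the exec in favour of this would also avoid the double restart per run.

```yaml
# Hiera data for the network node (placement in your hierarchy is assumed)
neutron::agents::dhcp::dnsmasq_config_file: '/etc/neutron/dnsmasq-neutron.conf'
```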

Puppet master compile, Hiera error

I'm trying to troubleshoot this error and am not sure where to look. One of my nodes, a Debian box, has correct certs, but I'm having issues running `puppet agent -t`. I'm using Hiera as the classifier. I double-checked the YAML file for this node and can pull values from it using the hiera command line:

```
hiera classes ::hostname=node1
["nginx","ntp"]
```

When running `puppet master --debug --compile node1` it gets the facts and runs through catalog compilation, but I get this error towards the end:

```
Debug: hiera(): Hiera YAML backend starting
Debug: hiera(): Looking up classes in YAML backend
Debug: hiera(): Looking for data source node/node1
Debug: hiera(): Found classes in node/node1
Debug: hiera(): Looking for data source Debian
Error: Evaluation Error: Error while evaluating a Function Call, (): could not find expected ':' while scanning a simple key at line 3 column 1 at /etc/puppetlabs/code/environments/production/manifests/site.pp:38:1 on node node1
Error: Evaluation Error: Error while evaluating a Function Call, (): could not find expected ':' while scanning a simple key at line 3 column 1 at /etc/puppetlabs/code/environments/production/manifests/site.pp:38:1 on node node1
Error: Failed to compile catalog for node node1: Evaluation Error: Error while evaluating a Function Call, (): could not find expected ':' while scanning a simple key at line 3 column 1 at /etc/puppetlabs/code/environments/production/manifests/site.pp:38:1 on node node1
```

I copied a working YAML file from another node to this node; the other node compiles fine, but something about this one causes the error. I also tried cleaning the cert and re-adding it, but I get the same error on the compile debug run.
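A hedged reading of the debug output: the YAML parse error ("could not find expected ':' ... line 3 column 1") surfaces right after "Looking for data source Debian", so the osfamily-level data file is a likely suspect. One quick check (the file path is an assumption based on the environment shown above) is to load it with Ruby's YAML parser directly:

```shell
ruby -e "require 'yaml'; YAML.load_file(ARGV[0])" \
  /etc/puppetlabs/code/environments/production/hieradata/Debian.yaml
```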

Does Puppet provide a validation/syntax checker for Hiera?

The tool `puppet parser validate` will [check the syntax of my Puppet manifests](http://docs.puppetlabs.com/references/latest/man/parser.html#ACTIONS):

```
[root@puppet3 ~]# puppet parser validate /etc/puppet/manifests/site.pp
Warning: The use of 'import' is deprecated at /etc/puppet/manifests/site.pp:18. See http://links.puppetlabs.com/puppet-import-deprecation (at grammar.ra:610:in `_reduce_190')
Error: Could not parse for environment production: No file(s) found for import of 'nodes/*.pp' at /etc/puppet/manifests/site.pp:18
[root@puppet3 ~]#
```

And I can check my manifests against the Puppet Style Guide using `puppet-lint`:

```
[root@puppet3 ~]# puppet-lint /etc/puppet/modules/hosts/manifests/init.pp
WARNING: class not documented on line 1
[root@puppet3 ~]#
```

Is there a Puppet validation tool for Hiera or for my YAML files?
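A hedged note, since Puppet itself does not ship a Hiera-specific checker comparable to `puppet parser validate`: Hiera data files are plain YAML, so any YAML parser catches the syntax errors that break lookups. A minimal sketch using Ruby's standard library (the hieradata path is an assumption):

```shell
for f in /etc/puppet/hieradata/*.yaml; do
  ruby -e "require 'yaml'; YAML.load_file(ARGV[0])" "$f" || echo "FAILED: $f"
done
```

Third-party tools such as yamllint cover the same ground with nicer reporting, but they validate YAML in general rather than anything Hiera-specific.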

statsd module causes hiera error

Hi, I installed the statsd module with `puppet module install ...` and then used it like this in a wrapper:

```puppet
class { 'statsd':
  backends                         => ['./backends/graphite'],
  graphiteHost                     => "graphite-relay.dev.sizmdx.com",
  graphite_globalPrefix            => 'statsite',
  graphite_legacyNamespace         => false,
  stackdriver_sendTimerPercentiles => false,
  configfile                       => '/etc/statsd/localConfig.js',
}
```

After that I added this to my node.pp file:

```puppet
class { 'monitor': }
```

When I run the puppet agent on the agent node I get a Hiera error:

```
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Error from DataBinding 'hiera' while looking up 'statsd::librato_skipInternalMetrics': undefined method `to_sym' for []:Array on node beadmin-dev0-lior-cm-3f4gxw0f.sizmek
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
```

1) Should I use `include monitor` instead?
2) How come Hiera doesn't find the parameters on the module class?

Any idea would be appreciated.

Puppet version: 3.8.7
Ruby version: 1.8.7
Hiera version: 1.3.4
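Two hedged thoughts rather than a confirmed diagnosis: the `undefined method 'to_sym' for []:Array` comes from inside Hiera's data binding, and this symptom is commonly reported when hiera.yaml itself is malformed (for example a `:backends:` or `:hierarchy:` entry that parses as a nested array), so that file is worth re-checking. Independently of the error, the parameters can be moved to Hiera and the classes pulled in with plain `include`, which is what question 1 is circling around; the keys below simply mirror the parameter names already used in the wrapper:

```yaml
statsd::backends:
  - './backends/graphite'
statsd::graphiteHost: 'graphite-relay.dev.sizmdx.com'
statsd::graphite_globalPrefix: 'statsite'
statsd::graphite_legacyNamespace: false
statsd::stackdriver_sendTimerPercentiles: false
statsd::configfile: '/etc/statsd/localConfig.js'
```

with `include statsd` in the wrapper and `include monitor` in the node definition.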

How to upgrade hiera on puppet 3.8.4

Hi, I want to upgrade Hiera from 1.3.4 to 3.x. How can I do that, and is it safe? Thanks

"hiera_include() has been converted to 4x API" after updating to 4.4.2

Hello, I've recently updated Puppet from 4.4.1 to 4.4.2 and to my surprise I was greeted with:

```
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Evaluation Error: Error while evaluating a Function Call, hiera_include() has been converted to 4x API at /etc/puppetlabs/code/environments/production/site.pp:1:1 on node (...)
```

The contents of site.pp are just one line:

```puppet
hiera_include('classes')
```

I didn't see any changes in the current documentation, and it looks like hiera_include is loaded from lib/puppet/parser/functions (Puppet 3) and not lib/puppet/functions/ (Puppet 4). Any ideas?
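A hedged troubleshooting sketch, not a confirmed cause: this message can appear when a stale 3.x-style copy of the function (left behind by an older install or shipped inside a module's lib directory) shadows the built-in one, which would also explain why it loads from lib/puppet/parser/functions. Searching the code and Ruby paths for such a copy is cheap:

```shell
find /etc/puppetlabs/code /opt/puppetlabs/puppet/lib/ruby \
  -path '*parser/functions/hiera_include.rb' 2>/dev/null
```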

Hiera facter interpolation not working with puppetdb on key name

Having the following YAML file:

```yaml
profile::monitoring::satellite_poller::zones:
  'zone01-master':
    'global': false
  'zone01':
    'endpoints':
      "%{::fqdn}":
        'host': "%{::ipaddress}"
        'port': '5665'
    'parent': 'zone01-master'
    'global': false
```

When using a local `puppet apply`, the %{::fqdn} gets interpolated into the host's FQDN. When running with `puppet agent`, connected to a puppetmaster with a puppetdb backend, the %{::fqdn} stays as the literal "%{::fqdn}", but %{::ipaddress} gets the IP value.

Running a hiera lookup on the puppetmaster fetches the right info:

```
hiera profile::monitoring::satellite_poller::zones --config /etc/puppet/hiera.yaml ::fqdn=test ::ipaddress=127.0.0.1
{"zone01-master"=>{"global"=>false},
 "zone01"=>
  {"parent"=>"zone01-master",
   "global"=>false,
   "endpoints"=>{"test"=>{"port"=>"5665", "host"=>"127.0.0.1"}}}}
```

According to https://docs.puppet.com/hiera/1/variables.html this is valid, since it's not the root key that is dynamic.

Running Puppetmaster 3.6.1, Hiera 1.3.4.

How to reference Hiera data based on group classification in Puppet Enterprise?

We are deploying Puppet Enterprise, and are loosely following the "roles and profiles" module pattern, except that we are using the Enterprise Console's node classifier in place of a "roles" module. What we are trying to do now is come up with a means of providing group-specific overrides within Hiera, while still using the PE Console as our classifier. In other words, nodes that are in the XYZ classification group (or 'role') in the Puppet Console would pull their Hiera data from "roles/XYZ.yaml" first. Our Hiera tree would look something like:

```yaml
:hierarchy:
  - "nodes/%{::trusted.certname}"
  - "roles/${role}"
  - "%{facts.osfamily}_%{facts.os.release.major}"
  - common
```

I have seen design patterns that involve assigning 'role' as an external fact on the node and then referencing it in Hiera, but that seems to introduce its own complexities. We could have a set of classes that pushes the needed facts, and then assign those classes to the relevant classification groups in the PE Console, but then we get a chicken-and-egg situation where a new node would need multiple Puppet runs before Hiera can pick up on the custom facts. We could push the facts ourselves when we first provision the server, but I'm trying to avoid manual hackery as much as possible during the provisioning process.

I was wondering if there is a more graceful way to give Hiera the group-specific info it needs from PE Console group assignments, rather than working around the PE Console to assign custom facts first?
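One hedged alternative (a sketch, not PE-specific guidance): stamp the role into the agent's certificate at provisioning time via `csr_attributes.yaml` using the standard `pp_role` extension, and key the hierarchy off trusted data, which is available on the very first run and cannot be altered by the node afterwards. It does not read the console's group assignments directly, but it avoids the multi-run fact bootstrap:

```yaml
:hierarchy:
  - "nodes/%{::trusted.certname}"
  - "roles/%{::trusted.extensions.pp_role}"
  - "%{facts.osfamily}_%{facts.os.release.major}"
  - common
```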

"" is not a Hash. It looks to be a String

I have **init.pp** as below:

```puppet
class mymodule {
  include mymodule::users
  include mymodule::install
  include mymodule::config
  include mymodule::service

  Class[mymodule::users] ->
  Class[mymodule::install] ->
  Class[mymodule::config] ->
  Class[mymodule::service]
}
```

I have **install.pp** as below:

```puppet
class mymodule::install (
  $artifacts = undef,
) inherits mymodule::params {
  validate_hash($artifacts)
  create_resources('different_module::artifact', $artifacts)
}
```

And I am creating specs for these classes as below:

**init_spec.pp**

```ruby
require 'spec_helper'
require 'hiera'

describe 'mymodule', :type => :class do
  let(:pre_condition) { '
    include mymodule::install
    include mymodule::users
    include mymodule::service
    include mymodule::config
  ' }

  it { should contain_class('mymodule') }
end
```

**install_spec.pp**

```ruby
require 'spec_helper'
require 'hiera'

describe 'mymodule::install', :type => :class do
  let(:pre_condition) { 'include different_module' }
  let(:hiera_config) { 'spec/fixtures/hiera/hiera.yaml' }

  hiera = Hiera.new(:config => 'spec/fixtures/hiera/hiera.yaml')
  artifacts = hiera.lookup('mymodule::install::artifacts', nil, nil)

  let(:params) { { :artifacts => artifacts } }

  it { should contain_class('mymodule::install') }
  it { should contain_different_module__artifact('mymodule') }
end
```

**spec/fixtures/hiera/hiera.yaml**

```yaml
---
:backends:
  - yaml
:yaml:
  :datadir: spec/fixtures/hieradata
:hierarchy:
  - common
```

**spec/fixtures/hieradata/common.yaml**

```yaml
mymodule::install::artifacts:
  mymodule:
    groupid: 'au.com.org.app'
    artifactid: 'mymodule'
    version: 'r1.0'
    type: 'tgz'
    destination: '/tmp/mymodule_r1.0.tar.gz'
    timeout: '0'
```

When I run rspec I get the error below:

```
"" is not a Hash. It looks to be a String at /tmp/mymodule/spec/fixtures/modules/mymodule/manifests/install.pp:6
```

EDIT: When I add Alex's fail statement before `validate_hash`, all the specs fail. Interestingly, only for the `should contain_class('mymodule')` spec is the value empty:

```
1) mymodule should contain Class[mymodule]
   Failure/Error: it { should contain_class('mymodule') }
   Puppet::Error:
     I got  for artifacts at /tmp/mymodule/spec/fixtures/modules/mymodule/manifests/install.pp:6 on node debian-vm.localdomain
```

And for the other specs I can see the value of artifacts, as below:

```
3) mymodule::install should contain Class[mymodule::install]
   Failure/Error: it { should contain_class('mymodule::install') }
   Puppet::Error:
     I got {"mymodule"=>{"groupid"=>"au.com.env.app", "artifactid"=>"mymodule", "version"=>"r4", "type"=>"tgz", "destination"=>"/tmp/mymodule_r49.tar.gz", "timeout"=>"0"}} for artifacts at /tmp/mymodule_r4/spec/fixtures/modules/mymodule_r4/manifests/install.pp:6 on node debian-vm.localdomain
```
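A hedged sketch of one explanation, based only on what is shown: the init spec compiles `mymodule::install` through `pre_condition` but never sets `hiera_config`, so Puppet's data bindings have no fixture data to resolve `$artifacts` with, and the parameter ends up empty when the class is reached via `include`. Assuming rspec-puppet's `:hiera_config` support is in place, pointing the init spec at the same fixture file may be enough:

```ruby
require 'spec_helper'

describe 'mymodule', :type => :class do
  # assumption: rspec-puppet wires this into Puppet's hiera data bindings
  let(:hiera_config) { 'spec/fixtures/hiera/hiera.yaml' }

  let(:pre_condition) { '
    include mymodule::install
    include mymodule::users
    include mymodule::service
    include mymodule::config
  ' }

  it { should contain_class('mymodule') }
end
```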

Is it possible to have conditionals in hiera yaml?

I am trying to achieve the following:

- I have certain variables defined in a Hiera YAML file, for example a.yaml.
- Some of the variables I want to read from external facts.
- However, whenever an external fact is not defined, I want to use some default.

So, for example:

```yaml
---
key1: value1
key2: %{value2_from_facts} | value2
```

In this case, if the custom fact 'value2_from_facts' is not defined, I want key2 to have value2. Is this possible? Is there any better alternative for assigning defaults to a Hiera value when the facts it uses are not resolved?
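A hedged sketch, assuming puppetlabs-stdlib is available: Hiera's YAML has no conditional or fallback syntax of its own, so the usual workaround is to apply the default where the value is consumed in Puppet code, for example:

```puppet
# falls back to 'value2' when the external fact is absent or empty
$key2 = pick($::value2_from_facts, 'value2')
```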

distribute custom fact built from hiera data with plugin-sync

Hello, I need a custom fact to access resources which require user credentials unique to each agent. For contractual reasons I cannot use the "normal" way of pushing a custom fact file and then doing another run to read it (I also had fun with /var/lib/puppet for the same reason). What I'd like to do is have the plugin-sync mechanics evaluate a templated Ruby fact on the master and populate it with the credentials based on the node name (Puppet does know it; it is talking to it).

A) Any idea where to look? I could not find the point where custom facts are distributed.
B) Is anybody else interested in this? I'd like to make a feature request.

**UPDATE 01:** The credentials are stored in a Hiera backend:

```yaml
---
my_module::access:
  nodename00: [ ]
  nodename01: [ ]
```

The requirements for the agents explicitly state that management credentials must not be permanently stored on agents (don't ask, already tried; by the way, this makes things like the Forge mysql module unusable for me). So the usual answer involving two Puppet runs is not an option. I already had to implement clean-up for the leftovers in /var/lib/puppet, so the fact will be gone after the run which deployed and used it.

Hiera lookup on array of hashes, is it possible?

Hello, I am not sure if this is possible at all, but let me describe the challenge I am experiencing. I have this configuration:

```yaml
cloud::provision:
  'strname':
    key1: str1
    key2: str2
    sg_key:
      - protocol: 'tcp'
        port: '1111'
        cidr: '172.31.0.0/16'
      - protocol: 'tcp'
        port: '2222'
        cidr: '172.31.0.0/16'
      - protocol: 'tcp'
        port: '3333'
        cidr: '172.31.0.0/16'
```

So, I have a lot of these rules (arrays of hashes) for every security group for every EC2 instance on AWS. I want to simplify the whole process and make a lookup like this:

*common.yaml*

```yaml
lookupkey:
  - protocol: 'tcp'
    port: '1111'
    cidr: '172.31.0.0/16'
  - protocol: 'tcp'
    port: '2222'
    cidr: '172.31.0.0/16'
  - protocol: 'tcp'
    port: '3333'
    cidr: '172.31.0.0/16'
```

and have this:

```yaml
cloud::provision:
  'strname':
    key1: str1
    key2: str2
    sg_key: "%{hiera('lookupkey')}"
```

Doing a lookup from the CLI for the first example, I get this response:

```
{"strname"=>
  {"key1"=>"str1",
   "key2"=>"str2",
   "sg_key"=>
    [{"protocol"=>"tcp", "port"=>"1111", "cidr"=>"172.31.0.0/16"},
     {"protocol"=>"tcp", "port"=>"2222", "cidr"=>"172.31.0.0/16"},
     {"protocol"=>"tcp", "port"=>"3333", "cidr"=>"172.31.0.0/16"}]}}
```

If I try a lookup for the second example (with the hiera interpolation), I get this:

```
{"strname"=>
  {"key1"=>"str1",
   "key2"=>"str2",
   "sg_key"=>
    "[{\"protocol\"=>\"tcp\", \"port\"=>\"1111\", \"cidr\"=>\"172.31.0.0/16\"}, {\"protocol\"=>\"tcp\", \"port\"=>\"2222\", \"cidr\"=>\"172.31.0.0/16\"}, {\"protocol\"=>\"tcp\", \"port\"=>\"3333\", \"cidr\"=>\"'172.31.0.0/16\"}]"}}
```

Is it possible to get non-escaped output? Is this possible at all? It's very uncomfortable to have over 300 instances and have to change SG rules one by one instead of in one place. Thanks.
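A hedged sketch rather than a confirmed answer for this Hiera version: `%{hiera('...')}` interpolation always flattens its result to a string, which is exactly the escaped output shown above. Hiera 3.x and Hiera 5 add an `alias()` interpolation function that preserves the data type, but only when it is the entire value for the key:

```yaml
cloud::provision:
  'strname':
    key1: str1
    key2: str2
    sg_key: "%{alias('lookupkey')}"
```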

How to change the verify_server_cert value in hiera for the apache ldap module

I'm trying to figure out how to change the verify_server_cert value in Hiera and I cannot figure it out. I'm trying to set LDAPTrustedGlobalCert to off in my config; verify_server_cert seems to be the parameter to do it, but I just can't fathom it. I've tried this in Hiera:

```yaml
apache::mod::authnz_ldap::verify_server_cert : false
```

I've even tried explicitly declaring the module in my profile with that parameter set, but it throws an error saying it's already declared. Just wondered if anyone had any ideas.
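A hedged sketch, assuming a puppetlabs-apache version where `apache::mod::authnz_ldap` exposes this parameter: automatic parameter lookup should bind it as long as the class is pulled in with include-like semantics (which is how the apache module loads its mod classes), and the key is conventionally written without a space before the colon:

```yaml
apache::mod::authnz_ldap::verify_server_cert: false
```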

hiera doesn't work

Hi all! I have puppetmaster 4 + puppetdb 4.1 and I'm trying to use Hiera.

On the puppet master:

```
puppet nodes # cat /etc/puppetlabs/code/hiera.yaml
---
:backends:
  - yaml
:yaml:
  :datadir: "/etc/puppetlabs/code/environments/%{environment}/hieradata"
:hierarchy:
  - "nodes/%{::trusted.certname}"
  - "nodes/%{::hostname}"
  - common
```

```
puppet nodes # cat /etc/puppetlabs/code/environments/production/hieradata/nodes/puppetdb.yaml
# /etc/puppetlabs/code/production/hieradata/web01.example.com.yaml
---
ntp::restrict:
  -
ntp::autoupdate: false
ntp::enable: true
ntp::servers:
  - 0.us.pool.ntp.org iburst
  - 1.us.pool.ntp.org iburst
  - 2.us.pool.ntp.org iburst
  - 3.us.pool.ntp.org iburst
```

Why does nothing happen when I run **puppet agent -t** on the puppetdb node?

```
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for puppetdb
Info: Applying configuration version '1464975833'
Notice: Applied catalog in 0.03 seconds
```

Nothing happens... Maybe I must do something else?
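A hedged observation: Hiera data on its own only supplies parameters to classes that something already declares, and nothing in the setup shown declares `ntp`, so a catalog with no ntp resources is expected. A minimal sketch of the two common ways to declare it (the node name and the 'classes' key are assumptions):

```puppet
# Option 1: declare the class in site.pp via a node definition
node 'puppetdb' {
  include ntp
}

# Option 2: classify from Hiera by adding a 'classes' array to the data
# (classes: ['ntp']) and putting this single line in site.pp:
#   hiera_include('classes')
```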

configuring passenger from hiera with puppetlabs-apache module

I'm looking to set up a Passenger app using the puppetlabs-apache module, configuring it via Hiera. If I use the following vhost definition it works fine:

```yaml
web::vhosts:
  node.com:
    docroot: /var/www/app
    serveraliases: "%{fqdn}"
    passenger_app_env: production
    passenger_pre_start: "http://%{fqdn}/contact"
    passenger_min_instances: 3
    directories:
      - path: /var/www/app
        passenger_enabled: "on"
```

but if I try to define the pool_idle_time for Passenger with the following config:

```yaml
web::vhosts:
  node.com:
    docroot: /var/www/app
    serveraliases: "%{fqdn}"
    passenger_app_env: production
    passenger_pre_start: "http://%{fqdn}/contact"
    passenger_min_instances: 3
    passenger_pool_idle_time: 0
    directories:
      - path: /var/www/app
        passenger_enabled: "on"
```

I get an error:

> Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter passenger_pool_idle_time on node abc.co.uk

Yet from looking at templates/mod/passenger.conf.erb it looks as if that is an option that can be configured.
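A hedged note on where that template lives: templates/mod/passenger.conf.erb renders the server-wide settings managed by the `apache::mod::passenger` class rather than the per-vhost ones, so the global idle time can likely be set there, while whether `apache::vhost` itself accepts `passenger_pool_idle_time` depends on the installed module version (it was added in later releases). A sketch:

```yaml
apache::mod::passenger::passenger_pool_idle_time: 0
```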