<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Jumping Bean Blogs</title>
  <link rel="self" href="https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932" />
  <subtitle>Jumping Bean Blogs</subtitle>
  <id>https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932</id>
  <updated>2026-03-13T14:05:17Z</updated>
  <dc:date>2026-03-13T14:05:17Z</dc:date>
  <entry>
    <title>Alfresco Share Email Documents Action</title>
    <link rel="alternate" href="https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=14970600" />
    <author>
      <name>Mark Clarke</name>
    </author>
    <id>https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=14970600</id>
    <updated>2025-05-10T11:54:24Z</updated>
    <published>2025-05-10T11:24:00Z</published>
    <summary type="html">&lt;div class="container"&gt;
&lt;div class="justify-content-center row"&gt;
&lt;div class="col-lg-8"&gt;
&lt;div class="card"&gt;
&lt;div class="card-body"&gt;
&lt;p&gt;When we moved from Drupal to Liferay back in 2020, we sadly didn't have time to migrate our old blogs to the new platform. We still see hits for the old URLs in our logs, and we have decided to make a concerted effort to resurrect these old posts for those who find them useful. For most of the popular posts we are managing to recover the original article content and repost it, largely unmodified apart from URLs and other minor changes for the migration. So please bear in mind the content may now be a little dated, especially since the articles were already old when we migrated.&lt;/p&gt;

&lt;p&gt;Unfortunately we cannot recover every post, and this is one of the casualties: the article on how we extended Alfresco Share with an action to email documents directly from Share, rather than having to download them and then attach them to emails. Although we still use Alfresco, Alfresco Share is now deprecated (though still supported), and effort is being put into the Alfresco Developer Framework. Can't say I will miss Surf.&lt;br&gt;
&lt;br&gt;
Contact Us if you need any &lt;a href="https://jumpingbean.co.za/w/we-build/java-software-development" rel="noopener noreferrer" target="_blank"&gt;Alfresco development work or consulting&lt;/a&gt;.&lt;/p&gt;

&lt;div class="alert alert-warning" role="alert"&gt;
&lt;p&gt;Anyway, for those looking for this article, all we can currently provide is the link to the GitHub repo. We hope you find it useful.&lt;/p&gt;
&lt;/div&gt;

&lt;div class="github-link"&gt;&lt;a class="btn btn-primary" href="https://github.com/mxc/alfresco-emaildocuments-share" rel="noopener noreferrer" target="_blank"&gt;Alfresco Email Documents Share (GitHub) &lt;/a&gt;&lt;/div&gt;

&lt;p class="text-muted"&gt;Published: [Original Publication Date Unknown, Likely Prior to 2020]&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</summary>
    <dc:creator>Mark Clarke</dc:creator>
    <dc:date>2025-05-10T11:24:00Z</dc:date>
  </entry>
  <entry>
    <title>JPA 2 Criteria API Tutorial</title>
    <link rel="alternate" href="https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=14565851" />
    <author>
      <name>Mark Clarke</name>
    </author>
    <id>https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=14565851</id>
    <updated>2025-04-24T05:37:06Z</updated>
    <published>2025-04-24T05:06:00Z</published>
    <summary type="html">&lt;p class="submitted"&gt;Submitted by Mark Clarke on Sun, 07/10/2011 - 14:00&lt;/p&gt;

&lt;p&gt;When JPA 2 was released in 2009 it included the new Criteria API. The purpose of the API was to get away from embedding JPQL (JPA Query Language) strings in your code. Although JPQL seems like a great way to leverage your existing SQL knowledge, in the OO world it has a major drawback, namely: there is no compile-time checking of your query strings. The first time you find out about a spelling or syntax error in a query string is at run time. This can be quite a productivity drain, with developers having to correct, compile and redeploy to continue.&lt;/p&gt;

&lt;p&gt;Unit testing your code goes some way to addressing this problem, but one area that cannot be addressed by unit tests is refactoring. Most refactoring tools battle with strings, so you are stuck rerunning unit tests and correcting each string that slipped through the manual changes on each iteration until all is well. Now, with the JPA Criteria API, it's possible to have type-safe queries that are checked at compile time, and refactoring becomes much more efficient!&lt;/p&gt;

&lt;div class="card mb-lg-3 ml-lg-3 mt-lg-3 pl-lg-5 pr-lg-5 text-center text-dark"&gt;
&lt;div class="card-body"&gt;&lt;strong&gt;Get Expert Java Training&lt;/strong&gt;

&lt;p class="card-text"&gt;&lt;a href="https://java-training.net/courses" rel="noopener noreferrer" target="_blank"&gt;Java: Elevate your programming with our fundamental Java Training courses.&lt;/a&gt;&lt;br&gt;
&lt;a href="https://springtraining.co.za/courses" rel="noopener noreferrer" target="_blank"&gt;Spring:&amp;nbsp;Unlock enterprise Java skills with our focused Spring Framework training&lt;/a&gt;&lt;/p&gt;
&lt;a class="btn btn-primary" href="https://jumpingbean.co.za/about/#contactus" target="_blank"&gt;Contact Us&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;

&lt;h2&gt;JPA Criteria API&lt;/h2&gt;

&lt;p&gt;There is a price to pay for this static checking though. First you need to generate a set of meta model classes, &lt;a href="#metamodel"&gt;more on what these are below&lt;/a&gt;, that describe the fields of your entities; luckily this is easily achieved by adding a step to your Maven build process. The second price you pay is verbosity and an API that is less than intuitive at first glance.&lt;/p&gt;

&lt;h3&gt;Generating the JPA Criteria Meta Model Classes&lt;/h3&gt;

&lt;p&gt;To generate the meta model classes during your build process, add the following plugin configuration to your Maven pom.xml. In my case, because I have a different persistence unit for our unit tests, I had to specify which persistence unit I wanted built; otherwise a bug causes the plugin to throw an error when it finds an entity listed in two persistence units (i.e. the production config and the test config). I make use of EclipseLink in the pom.xml below.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;plugin&amp;gt;
    &amp;lt;groupId&amp;gt;org.apache.maven.plugins&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;maven-source-plugin&amp;lt;/artifactId&amp;gt;
&amp;lt;/plugin&amp;gt;
&amp;lt;plugin&amp;gt;
    &amp;lt;groupId&amp;gt;org.bsc.maven&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;maven-processor-plugin&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;1.3.5&amp;lt;/version&amp;gt;
    &amp;lt;executions&amp;gt;
        &amp;lt;execution&amp;gt;
            &amp;lt;id&amp;gt;process&amp;lt;/id&amp;gt;
            &amp;lt;goals&amp;gt;
                &amp;lt;goal&amp;gt;process&amp;lt;/goal&amp;gt;
            &amp;lt;/goals&amp;gt;
            &amp;lt;phase&amp;gt;generate-sources&amp;lt;/phase&amp;gt;
            &amp;lt;configuration&amp;gt;
                &amp;lt;compilerArguments&amp;gt;-Aeclipselink.persistencexml=src/main/resources/META-INF/persistence.xml -Aeclipselink.persistenceunits=mypu&amp;lt;/compilerArguments&amp;gt;
                &amp;lt;processors&amp;gt;
                    &amp;lt;processor&amp;gt;org.eclipse.persistence.internal.jpa.modelgen.CanonicalModelProcessor&amp;lt;/processor&amp;gt;
                &amp;lt;/processors&amp;gt;
            &amp;lt;/configuration&amp;gt;
        &amp;lt;/execution&amp;gt;
    &amp;lt;/executions&amp;gt;
&amp;lt;/plugin&amp;gt;&lt;/code&gt;
&lt;/pre&gt;

&lt;p&gt;Now, at compile time, the set of meta model classes should be automatically generated for you.&lt;/p&gt;


&lt;h2&gt;How to use the JPA Criteria API&lt;/h2&gt;


&lt;p&gt;The Criteria API can seem quite daunting at first, but it's not that bad once you grok its basic design approach. There are two main objects that you will use to create your query, namely the CriteriaBuilder object and a CriteriaQuery object. The first step is to get a handle on a CriteriaBuilder object and then create a CriteriaQuery object from it. This is done with the following boilerplate code, where em is an EntityManager object.&lt;/p&gt;

&lt;p&gt;CriteriaBuilder cb = em.getCriteriaBuilder();&lt;br&gt;
CriteriaQuery cqry = cb.createQuery();&lt;/p&gt;

&lt;p&gt;From the CriteriaQuery object you will create the main "components" of a query, namely:&lt;/p&gt;

&lt;p&gt;Criteria Components&lt;/p&gt;

&lt;table border="1" cellpadding="1" cellspacing="1" style="width:500px"&gt;
	&lt;thead&gt;
		&lt;tr&gt;
			&lt;th scope="col"&gt;Component&lt;/th&gt;
			&lt;th scope="col"&gt;Description&lt;/th&gt;
			&lt;th scope="col"&gt;How to create&lt;/th&gt;
		&lt;/tr&gt;
	&lt;/thead&gt;
	&lt;tbody&gt;
		&lt;tr&gt;
			&lt;td&gt;Select&lt;/td&gt;
			&lt;td&gt;The objects or object fields you wish to return. Pretty much like the select part of a JQL query. You can do aggregations here too and return fields from objects but in this brief tutorial we will just select objects.&lt;/td&gt;
			&lt;td&gt;CriteriaQuery.select&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td&gt;From&lt;/td&gt;
			&lt;td&gt;This is where we stipulate which entity (table) we are quering. You can also follow relationships to other entities that are part of the root entity i.e "joins" and also subSelects.&lt;/td&gt;
			&lt;td&gt;CriteriaQuery.from&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td&gt;Where&lt;/td&gt;
			&lt;td&gt;This is the where you stipulate the criteria you wish to apply to the entities that you are selecting. You create "Predicates" which are used to build up the "where" clause.&lt;/td&gt;
			&lt;td&gt;
			&lt;p&gt;CriteriaQuery.where&lt;/p&gt;

			&lt;p&gt;CriteriaBuilder.equals&lt;br&gt;
			CriteriaBuilder.lessThanOEquals&lt;br&gt;
			etc.&lt;/p&gt;
			&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td&gt;Order By&lt;/td&gt;
			&lt;td&gt;If you wish to stipulate the order object should be returned in, you do it here&lt;/td&gt;
			&lt;td&gt;CriteriaQuery.orderBy&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td&gt;Group By&lt;/td&gt;
			&lt;td&gt;For aggregations you stipulate the object fields or objects here.&lt;/td&gt;
			&lt;td&gt;CriteriaQuery.groupBy&lt;/td&gt;
		&lt;/tr&gt;
	&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;In practice nearly every method on CriteriaQuery and CriteriaBuilder takes an object that implements the interface javax.persistence.criteria.Expression, Selection or Predicate, which can be very unhelpful for a beginner deciding what to pass in; especially since a lot of the objects implement both interfaces and the Expression interface inherits from the Selection interface! (Does that make your brain hurt? I am sure the API designer must have had an aneurysm.)&lt;/p&gt;

&lt;p&gt;It really doesn't make sense that a Path object can be passed to the where method. But it is best not to pay too much attention to these high-level interfaces and instead understand the process of creating a CriteriaQuery, at least in the beginning.&lt;/p&gt;

&lt;p&gt;The simplest approach to understanding the API is to separate out the task of creating a query into the following steps:&lt;/p&gt;

&lt;ol&gt;
	&lt;li&gt;Create your CriteriaBuilder and CriteriaQuery objects,&lt;/li&gt;
	&lt;li&gt;Set up your from clause,&lt;/li&gt;
	&lt;li&gt;Set up your select clause,&lt;/li&gt;
	&lt;li&gt;Set up your criteria or predicates,&lt;/li&gt;
	&lt;li&gt;Set up the where clause using your predicates,&lt;/li&gt;
	&lt;li&gt;Execute the query.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;A Simple CriteriaQuery Example&lt;/h2&gt;

&lt;p&gt;Let's assume we have a simple entity called MyEntity which has the following fields:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;dateCreated - Date, and&lt;/li&gt;
	&lt;li&gt;age - Integer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is a simple CriteriaQuery example.&lt;/p&gt;

&lt;pre&gt;//Boilerplate
CriteriaBuilder cb = em.getCriteriaBuilder(); //Step 1
CriteriaQuery cqry = cb.createQuery(); //Step 1

//Interesting stuff happens here
Root&amp;lt;MyEntity&amp;gt; root = cqry.from(MyEntity.class); //Step 2
cqry.select(root); //Step 3

//Boilerplate code
Query qry = em.createQuery(cqry); //Step 6
List&amp;lt;MyEntity&amp;gt; results = qry.getResultList(); //Step 6
&lt;/pre&gt;


&lt;p&gt;So we have two boilerplate sections: step 1, which creates the criteria objects you need, and step 6, at the end, which executes the query.&lt;/p&gt;


&lt;p&gt;The main work happens in the middle, steps 2 - 5, where you construct your query. The simple example above will return all entities in the table mapped to the MyEntity class. We had to tell the query which entity we were selecting from with the "from" method, and then what we wanted returned from the query with the "select" method.&lt;/p&gt;


&lt;p&gt;Using our step-by-step approach we can begin to build more complicated queries. Let's say we want to add some criteria to the query. To do this we need to populate the "where" method of the CriteriaQuery with Predicate objects.&lt;/p&gt;


&lt;p&gt;A Predicate object is usually created using one of the methods on the CriteriaBuilder class (the Expression interface can also create some Predicates, but for now just think of the CriteriaBuilder for this task). So let's say we want to find all MyEntity objects with age &amp;gt; 10.&lt;/p&gt;

&lt;pre&gt;            Root&amp;lt;MyEntity&amp;gt; root = cqry.from(MyEntity.class); //Step 2

            cqry.select(root); //Step 3
            Predicate pGtAge = cb.gt(root.get("age"),10); //Step 4
            cqry.where(pGtAge); //Step 5
&lt;/pre&gt;

&lt;p&gt;When creating a Predicate we have to tell the API which field of which object we are comparing against. We may have more than one entity being queried, as with a join for example, so you need a reference to the entity being queried.&lt;/p&gt;

&lt;p&gt;The object returned by the "from" method above is what we use here. We then use the "get" method to stipulate which field of the entity we want to compare against. Yes, the API is sadly very verbose.&lt;/p&gt;

&lt;p&gt;If we wanted to string more than one criterion together, let's say age &amp;gt; 10 and dateCreated &amp;gt; "2011-07-01", then we would create two Predicates and "and" them as follows:&lt;/p&gt;

&lt;pre&gt;//assume we have created a date object for 2011-07-01
//called date

Root&amp;lt;MyEntity&amp;gt; root = cqry.from(MyEntity.class); //Step 2
cqry.select(root); //Step 3
Predicate pGtAge = cb.gt(root.get("age"),10); //Step 4
Predicate pGtDateCreated =
        cb.greaterThan(root.get("dateCreated"),date); //Step 4
Predicate pAnd = cb.and(pGtDateCreated,pGtAge); //Step 4

cqry.where(pAnd); //Step 5
&lt;/pre&gt;

&lt;h2&gt;&lt;a id="metamodel" name="metamodel"&gt;&lt;/a&gt;Using Meta Model Classes in Query&lt;/h2&gt;

&lt;p&gt;Using the many methods on CriteriaBuilder you can build up sophisticated "where" clauses. Use the autocomplete on your IDE to see what methods are available or check out the API docs.&lt;/p&gt;

&lt;p&gt;You may be wondering why we are using strings in our Predicate objects. After all, doesn't this defeat the purpose of not using JPQL? Are criteria queries then subject to the same failings as the JPQL queries we set out in the introduction?&lt;/p&gt;

&lt;p&gt;The answer is to use the meta model classes that we generated at the beginning of this tutorial. Once the classes have been generated you can refer to fields of entity objects through the meta model classes. A meta model class has the same name as your entity class with an underscore (_) appended. So to say we are comparing against the age field we would use MyEntity_.age. The query above is rewritten below using the meta model classes.&lt;/p&gt;

&lt;div&gt;
&lt;pre&gt;//assume we have created a date object for 2011-07-01
//called date

Root&amp;lt;MyEntity&amp;gt; root = cqry.from(MyEntity.class); //Step 2
cqry.select(root); //Step 3
Predicate pGtAge = cb.gt(root.get(MyEntity_.age),10); //Step 4
Predicate pGtDateCreated =
        cb.greaterThan(root.get(MyEntity_.dateCreated),date); //Step 4
Predicate pAnd = cb.and(pGtDateCreated,pGtAge); //Step 4
cqry.where(pAnd); //Step 5
&lt;/pre&gt;

&lt;h2&gt;Criteria Query Using Joins&lt;/h2&gt;

&lt;p&gt;So what if we want to query across entities, i.e. do a "join" query? Let's say we have another entity called AnotherEntity with fields as follows:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;name - String&lt;/li&gt;
	&lt;li&gt;enabled - boolean&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition we extend MyEntity to hold a reference to an AnotherEntity object. So MyEntity becomes:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;dateCreated - Date,&lt;/li&gt;
	&lt;li&gt;age - Integer and&lt;/li&gt;
	&lt;li&gt;anotherEntity - AnotherEntity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let's say we want to query for all MyEntity objects whose AnotherEntity is disabled. We would do this as follows:&lt;/p&gt;

&lt;pre&gt;Root&amp;lt;MyEntity&amp;gt; root = cqry.from(MyEntity.class); //Step 2
Join&amp;lt;MyEntity,AnotherEntity&amp;gt; join =
        root.join(MyEntity_.anotherEntity); //Step 2
//the string-based alternative:
//Join&amp;lt;MyEntity,AnotherEntity&amp;gt; join =
//        root.join("anotherEntity"); //Step 2

cqry.select(root); //Step 3
Predicate pGtAge = cb.gt(root.get(MyEntity_.age),10); //Step 4
Predicate pGtDateCreated =
        cb.greaterThan(root.get(MyEntity_.dateCreated),date); //Step 4
Predicate pEqEnabled = cb.equal(join.get(AnotherEntity_.enabled),false); //Step 4
Predicate pAnd = cb.and(pGtDateCreated,pGtAge,pEqEnabled); //Step 4
cqry.where(pAnd); //Step 5
&lt;/pre&gt;

&lt;p&gt;You can see that our from step is getting more complex as we stipulate what to join on. As with the Predicates, one can use strings or the meta model classes to stipulate the desired field to join on. If you are going to the trouble of using the Criteria API then there really is no point in using strings!&lt;/p&gt;

&lt;h2&gt;More Complex Select Clause&lt;/h2&gt;

&lt;p&gt;We will now look at more complex select clauses, our step 3. Let's say we are only interested in the dateCreated field of our MyEntity object. Our code would look something like:&lt;/p&gt;

&lt;pre&gt;Root&amp;lt;MyEntity&amp;gt; root = cqry.from(MyEntity.class); //Step 2
cqry.select(root.get(MyEntity_.dateCreated)); //Step 3
&lt;/pre&gt;

&lt;p&gt;If we wanted to use an aggregate function and get the minimum dateCreated we could use something like:&lt;/p&gt;

&lt;pre&gt;Root&amp;lt;MyEntity&amp;gt; root = cqry.from(MyEntity.class); //Step 2
Expression min =
        cb.least(root.get(MyEntity_.dateCreated)); //Step 3 (least is MIN for non-numeric types)
cqry.select(min); //Step 3
&lt;/pre&gt;

&lt;p&gt;There are methods for returning arrays of objects or tuples from a query, and also a way to create new objects on the fly from the fields of queried entities. I may cover these in a future tutorial, but for now this should be enough to get you going on the Criteria API.&lt;/p&gt;
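As a taste of the tuple facilities just mentioned, here is a rough sketch following the same step-by-step recipe (raw types as in the rest of this tutorial; MyEntity and MyEntity_ are the example classes from above, and you would need a live EntityManager to actually run it):

```java
//Sketch only: returning several fields at once as Tuple rows
CriteriaBuilder cb = em.getCriteriaBuilder();        //Step 1
CriteriaQuery cqry = cb.createTupleQuery();          //Step 1 - rows come back as Tuple
Root root = cqry.from(MyEntity.class);               //Step 2
cqry.multiselect(root.get(MyEntity_.age),
        root.get(MyEntity_.dateCreated));            //Step 3
List results = em.createQuery(cqry).getResultList(); //Step 6
Tuple first = (Tuple) results.get(0);
Object age = first.get(0);                           //fields accessed by position
```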

&lt;p&gt;As you start using the JPA 2 Criteria API you will see that there are other ways of creating or manipulating some of the "components" we set out above besides the step-by-step approach, but if you ever get lost just follow the steps.&lt;/p&gt;
&lt;/div&gt;</summary>
    <dc:creator>Mark Clarke</dc:creator>
    <dc:date>2025-04-24T05:06:00Z</dc:date>
  </entry>
  <entry>
    <title>IPv6 - Set Up An IPv6 LAN with Linux</title>
    <link rel="alternate" href="https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=14412046" />
    <author>
      <name>Mark Clarke</name>
    </author>
    <id>https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=14412046</id>
    <updated>2025-04-22T06:00:24Z</updated>
    <published>2025-04-21T04:49:00Z</published>
    <summary type="html">&lt;h1&gt;IPv6 - Set Up An IPv6 LAN with Linux&lt;/h1&gt;

&lt;p&gt;Submitted by Mark Clarke on Sun, 04/05/2015 - 18:05&lt;br&gt;
This post has been recovered from our old blog. A lot has changed since 2015.&lt;/p&gt;

&lt;div&gt;
&lt;div id="md1"&gt;
&lt;p&gt;Setting up an &lt;strong&gt;IPv6 LAN with Linux&lt;/strong&gt;? Ever wondered how to do that? For years we have heard dire predictions about the impending doom of IPv4 and the imminent arrival of IPv6. As with any eschatological prediction, you either choose to ignore it and hope for the best, or you prepare for the event as best you can. So far the former strategy has served many sysadmins well and proved to be an effective strategy.&lt;/p&gt;

&lt;p&gt;If, however, you have decided to gird your loins and face IPv6 head on, you probably quickly discovered that, although there is a lot out there about the theory of IPv6, there is very little in the way of practical how-tos on setting up an IPv6 LAN.&lt;/p&gt;

&lt;p&gt;What makes understanding IPv6 troublesome is the complexity of working in a mixed environment of IPv4 and IPv6. This complexity becomes evident when one tries to connect to an external IPv6 network or the Internet which is still predominantly IPv4.&lt;/p&gt;

&lt;div class="card mb-lg-3 ml-lg-3 mt-lg-3 pl-lg-5 pr-lg-5 text-center text-dark"&gt;
&lt;div class="card-body"&gt;&lt;strong&gt;Get Linux Certified. Get Ahead.&lt;/strong&gt;

&lt;p class="card-text"&gt;Hands-on &lt;a href="https://linuxcertification.co.za/lpi" rel="noopener noreferrer" target="_blank"&gt;LPI training&lt;/a&gt; and &lt;a href="https://linuxcertification.co.za/linux-foundation/courses" rel="noopener noreferrer" target="_blank"&gt;Linux Foundation courses&lt;/a&gt;&lt;/p&gt;
&lt;a class="btn btn-primary" href="https://jumpingbean.co.za/about/#contactus" target="_blank"&gt;Contact Us&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;

&lt;h2&gt;Steps to Set Up an IPv6 LAN&lt;/h2&gt;

&lt;p&gt;This blog post breaks this down into two separate problems:&lt;/p&gt;

&lt;ol&gt;
	&lt;li&gt;A - Setting up an IPv6 LAN network with Linux,&lt;/li&gt;
	&lt;li&gt;B - Connecting your IPv6 network to the Internet&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you separate these two issues out it's much easier to figure out what you need to do. Both of these steps have issues that need to be understood before the IPv6 "ah-hah" moment. Once that happens you will also have the "oh no" moment, which might help you understand why IPv6 adoption is moving so slowly.&lt;br&gt;
&lt;br&gt;
In the first part we will configure an Ubuntu 14.10 server to manage an IPv6 LAN. In the second part we will deal with the myriad options for connecting an IPv6 network to the internet.&lt;/p&gt;

&lt;h2&gt;IPv6 Addressing - Some Theory&lt;/h2&gt;

&lt;p&gt;First we need to cover some theory on IPv6 addresses. There are a lot of articles covering IPv6 addressing on the web, so I will just summarize what you need to know to proceed with setting up an IPv6 network. There are some nuances and subtleties we will brush over to provide you with a working conceptual model.&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;IPv6 addresses consist of 8 groups of 16-bit hexadecimal numbers, giving a total address of 128 bits. (See Global Addresses below for an explanation of the 2001:0db8::/32 address block.)
	&lt;ul&gt;
		&lt;li&gt;2001:0db8:85a3:0000:0000:8a2e:0370:7334&lt;/li&gt;
		&lt;li&gt;2001:db8:85a3:0:0:8a2e:370:7334 -&amp;gt; leading zeros are dropped and each all-zero group (0000) becomes 0,&lt;/li&gt;
		&lt;li&gt;2001:db8:85a3::8a2e:370:7334 -&amp;gt; lastly the longest run of consecutive zero groups is replaced with a double colon ::&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;The first four groups of an address, 64 of the 128 bits, are the &lt;strong&gt;network prefix&lt;/strong&gt; (network mask). All IPv6 networks have a 64-bit network prefix,&lt;/li&gt;
	&lt;li&gt;The remaining 64 bits are the &lt;strong&gt;host identifier&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
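These abbreviation rules are purely mechanical, so you can check your own shorthand against a library. A quick sketch (not part of the original article) using Python's standard ipaddress module:

```python
import ipaddress

# The documentation address from the bullet list above
addr = ipaddress.IPv6Address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
print(addr.compressed)  # 2001:db8:85a3::8a2e:370:7334 - zeros dropped, longest zero run becomes ::
print(addr.exploded)    # 2001:0db8:85a3:0000:0000:8a2e:0370:7334 - full eight groups restored
```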

&lt;p&gt;Sometimes you will see an address listed with a prefix such as /48 or /56. This does not mean that 16 (64-48) or 8 (64-56) bits of the 64-bit network prefix have been reserved for use by hosts, as with IPv4 CIDR. The network prefix is always 64 bits long.&lt;br&gt;
&lt;br&gt;
This notation refers to a block of networks, i.e. all networks whose first 48 or 56 bits are set as specified. This is known as a &lt;strong&gt;routing prefix&lt;/strong&gt; and is used in routing rules, resulting in smaller routing tables. It is also used when you are assigned a block of IPv6 networks.&lt;/p&gt;

&lt;p&gt;The idea with IPv6 is that you should be assigned a block of networks by your ISP or IANA instead of a single host address or single IPv4 network as currently happens with IPv4.&lt;/p&gt;

&lt;p&gt;The remaining bits of the &lt;strong&gt;network prefix&lt;/strong&gt;, 16 (64-48) or 8 (64-56), are called the &lt;strong&gt;subnet id&lt;/strong&gt;. So the &lt;strong&gt;routing prefix&lt;/strong&gt; + &lt;strong&gt;subnet id&lt;/strong&gt; make up the &lt;strong&gt;network prefix&lt;/strong&gt; of an IPv6 address. Don't be confused by the use of the word subnet in &lt;strong&gt;subnet id&lt;/strong&gt;. It is not an IPv4 subnet mask. It is simply the part of the network prefix you get to assign yourself as the administrator of that block of network addresses.&lt;br&gt;
So if you get an IPv6 address block with a 56-bit &lt;strong&gt;routing prefix&lt;/strong&gt; it means you can have 256 (2⁸) networks, each with 1.844674407×10¹⁹ (2⁶⁴) hosts! It's up to you to determine how the subnet portion is used to create the network address. So if you are given a block of IPv6 networks such as fdc8:282a:f54c::/48 it means you can have 65,536 (2¹⁶) networks. Your network addresses are:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;fdc8:282a:f54c:1::/64&lt;/li&gt;
	&lt;li&gt;fdc8:282a:f54c:2::/64&lt;/li&gt;
	&lt;li&gt;...&lt;/li&gt;
	&lt;li&gt;fdc8:282a:f54c:ffff::/64&lt;/li&gt;
&lt;/ul&gt;
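The subnet arithmetic above is easy to sanity-check in code. A small sketch (not from the original article) with Python's ipaddress module, enumerating the /64 networks inside the /48 block:

```python
import ipaddress

# The example ULA block from the text
block = ipaddress.ip_network("fdc8:282a:f54c::/48")
subnets = list(block.subnets(new_prefix=64))
print(len(subnets))   # 65536, i.e. 2**16 /64 networks in a /48
print(subnets[1])     # fdc8:282a:f54c:1::/64
print(subnets[-1])    # fdc8:282a:f54c:ffff::/64
```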

&lt;p&gt;We will come to the &lt;strong&gt;host identifier&lt;/strong&gt; portion later. The IPv6 address space has been "sliced up" into different blocks. What you need to know about these blocks is given below (each address block is explained further later):&lt;/p&gt;

&lt;p&gt;Special IPv6 Address Blocks&lt;/p&gt;

&lt;table border="1" cellpadding="1" cellspacing="1"&gt;
	&lt;thead&gt;
		&lt;tr&gt;
			&lt;th scope="col"&gt;Name&lt;/th&gt;
			&lt;th scope="col"&gt;Prefix&lt;/th&gt;
			&lt;th scope="col" style="width:60%"&gt;Explanation&lt;/th&gt;
		&lt;/tr&gt;
	&lt;/thead&gt;
	&lt;tbody&gt;
		&lt;tr&gt;
			&lt;td&gt;Link Local&lt;/td&gt;
			&lt;td&gt;fe80::/10&lt;/td&gt;
			&lt;td&gt;Although this routing prefix is only 10 bits, leaving 54 bits for up to 2⁵⁴ subnets, only one subnet id has been allocated so far by the specification, which is fe80:0:0:0 or fe80::/64&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td&gt;Unique Local Addresses(ULA)&lt;/td&gt;
			&lt;td&gt;fc00::/7&lt;/td&gt;
			&lt;td&gt;Although this routing prefix is only 7 bits, the 8th bit must always be 1 for locally assigned addresses according to the spec. &lt;strong&gt;So what you will see in practice is fd00::/8&lt;/strong&gt;. At some later point we may see fc00::/8 addresses assigned. We will be using this address block in our setup.&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td&gt;Global Addresses&lt;/td&gt;
			&lt;td&gt;2001::/23&lt;/td&gt;
			&lt;td&gt;Global addresses will in fact make up most of the IPv6 address space. So far the 2001::/23 block has been assigned, and this is what you are likely to see in practice until further blocks are assigned to regional registrars. Within this, some addresses have been reserved for special purposes, such as 2001:0db8::/32, which is reserved for documentation, so if anyone copies it, it won't actually route. &lt;a href="http://www.iana.org/assignments/ipv6-unicast-address-assignments/ipv6-unicast-address-assignments.xhtml" target="_blank"&gt;To see which blocks have been assigned, see the IANA site.&lt;/a&gt;&lt;/td&gt;
		&lt;/tr&gt;
	&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;For more information on the address blocks see &lt;a href="https://www.iana.org/assignments/ipv6-address-space/ipv6-address-space.xhtml" target="_blank"&gt;the IANA site&lt;/a&gt;&lt;/p&gt;
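Python's ipaddress module knows about these special blocks, which is handy for checking which category an address falls into. A sketch (not from the original article):

```python
import ipaddress

print(ipaddress.IPv6Address("fe80::922b:34ff:fe7b:6ff1").is_link_local)  # True  (fe80::/10)
print(ipaddress.IPv6Address("fdc8:282a:f54c::1").is_private)             # True  (ULA, fc00::/7)
print(ipaddress.IPv6Address("2001:db8::1").is_global)                    # False (documentation block)
```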

&lt;h1&gt;A - Set Up an IPv6 Network&lt;/h1&gt;

&lt;h2&gt;1) Set Up a Link Local Only IPv6 LAN with Linux&lt;/h2&gt;

&lt;p&gt;We will set up an IPv6 network incrementally, starting with the simplest, most trivial IPv6 network and adding services as we go. This will help us arrive at an understanding of how the various services fit together, ending with a network that has all the basic services required of a business network.&lt;/p&gt;

&lt;h3&gt;Simplest IPv6 Network - Link Local Only&lt;/h3&gt;

&lt;p&gt;To set up the simplest IPv6 network you just have to boot up a host or two with an IPv6-enabled operating system such as Ubuntu. Open a terminal and type:&lt;/p&gt;

&lt;p class="text-center"&gt;"ip -6 address list"&lt;/p&gt;

&lt;p&gt;You should see output similar to the following:&lt;/p&gt;

&lt;pre&gt;1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qlen 1000
    inet6 fe80::922b:34ff:fe7b:6ff1/64 scope link
       valid_lft forever preferred_lft forever
&lt;/pre&gt;

&lt;p&gt;IPv6 &lt;strong&gt;link local&lt;/strong&gt; addresses have been assigned automatically to your interfaces. The IPv6 localhost address (IPv4's 127.0.0.1) is ::1/128. You can do the same on another host to get its IPv6 link local address and then do an IPv6 ping with "&lt;strong&gt;ping6&lt;/strong&gt;" - note the &lt;strong&gt;6&lt;/strong&gt;.&lt;/p&gt;

&lt;pre&gt;        ping6 fe80::922b:34ff:fe7b:6ff1&lt;/pre&gt;

&lt;p&gt;The &lt;strong&gt;fe80::/64&lt;/strong&gt; network prefix is the &lt;strong&gt;link local&lt;/strong&gt; network explained in the table above. It is the one IPv6 network prefix you will see repeated across different physical networks. In fact every host on an IPv6 network must have a link local address (fe80::/64).&lt;/p&gt;

&lt;h3&gt;Host Identifier Generation&lt;/h3&gt;

&lt;p&gt;The host identifier portion of the link local address, the remaining 64 bits, is generated from the MAC address, with an algorithm applied to extend the 48-bit MAC address to the 64-bit host identifier required by IPv6. &lt;a href="http://en.wikipedia.org/wiki/MAC_address" target="_blank"&gt;See EUI-64 for the algorithm used&lt;/a&gt;. The host identifier may also be manually assigned by the system administrator. This introduces the risk of duplicate IP addresses being assigned, so IPv6 has a duplicate address detection protocol that allows a host to determine whether there is a conflict before assigning itself an address.&lt;/p&gt;
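The modified EUI-64 expansion is simple to reproduce yourself: split the MAC address in half, insert ff:fe in the middle, and flip the universal/local bit of the first octet. A sketch (not from the original article) that recovers the eth0 host identifier shown earlier; the MAC here is back-derived from that example address:

```python
def eui64_host_identifier(mac):
    """Return the 64-bit host identifier derived from a 48-bit MAC address."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit of the first octet
    full = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    return ":".join("%02x%02x" % (full[i], full[i + 1]) for i in range(0, 8, 2))

print(eui64_host_identifier("90:2b:34:7b:6f:f1"))  # 922b:34ff:fe7b:6ff1
```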

&lt;p&gt;In most cases you will let this be automatically generated. In IPv4, addresses initially had to be assigned manually or via a DHCP server; later the 169.254/16 range was reserved for auto-configuration. Unlike in IPv4, your interfaces will always have an fe80::/64 address - it is not used instead of a valid routable IPv6 address. In IPv6 your interfaces will typically have multiple IP addresses.&lt;/p&gt;

&lt;h4&gt;Why do you need a Link Local Address?&lt;/h4&gt;

&lt;p&gt;IPv6 configuration is done using layer 3 (network layer) protocols and not layer 2 (media layer, e.g. Ethernet) as with IPv4, so a valid IPv6 address is required before any additional configuration can be done. Of course, it also allows for zero-config simple networks.&lt;/p&gt;

&lt;h4&gt;Pros and Cons of Link Local Network&lt;/h4&gt;

&lt;p&gt;With a link local address you can communicate with other IPv6 hosts on the local network segment or broadcast domain, i.e. the same switch or shared-media network. So for a home LAN not connected to the internet this is all that is required. You can connect to your printer, Smart TV, PlayStation etc. automatically using protocols such as UPnP and multicast DNS (ZeroConf). Connecting to the internet, or to a host in a different physical or logical network, will require a bit more work.&lt;/p&gt;

&lt;h2&gt;2) Set Up A Stateless Routable IPv6 Network&lt;/h2&gt;

&lt;p&gt;If this was all there was to it, we could all go home. But if you start to think about it, you will begin to have some doubts as to how useful a link local only IPv6 network is.&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;How do I assign the same address to a host every time without doing it manually? (It is possible for a node's host identifier to change between reboots if there is a conflict.)&lt;/li&gt;
	&lt;li&gt;What if I change the NIC and get a different IP address?&lt;/li&gt;
	&lt;li&gt;How do I configure the hosts for routes and other services such as DNS, NTP etc?&lt;/li&gt;
	&lt;li&gt;How do I communicate between two internal networks separated by a router or WAN link?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To address these issues you need to assign yourself an address beyond the link local one. This can be a &lt;strong&gt;unique local address (ULA)&lt;/strong&gt; or a &lt;strong&gt;global address&lt;/strong&gt;. For a global address you will need to get an IPv6 network block from your ISP or have one assigned to you by a regional internet registry. So we will make use of a ULA address, which you can assign yourself.&lt;/p&gt;

&lt;h3&gt;What is the difference between a ULA and a Global address?&lt;/h3&gt;

&lt;p&gt;By convention a ULA is not routed over the public internet. Routers on the public IPv6 network should refuse to route such traffic in a similar manner to private IPv4 addresses. Essentially there should be no routing entries in the routers responsible for internet traffic, making them unreachable from outside an organisation.&lt;/p&gt;
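
&lt;p&gt;Whether an address falls in the unique local block is easy to check programmatically; a minimal sketch using Python's standard ipaddress module:&lt;/p&gt;

```python
import ipaddress

# RFC 4193 unique local address block; in practice addresses start with "fd"
ULA_BLOCK = ipaddress.IPv6Network("fc00::/7")

def is_ula(addr: str) -> bool:
    """Return True if addr is a unique local address (should not be routed publicly)."""
    return ipaddress.IPv6Address(addr) in ULA_BLOCK

print(is_ula("fd5d:12c9:2201:1::1"))  # True
print(is_ula("2001:db8::1"))          # False
```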

&lt;p&gt;If you are going to start experimenting with IPv6 there are two reasons to use a ULA:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;You should start with a ULA address to avoid any mis-configuration disasters.&lt;/li&gt;
	&lt;li&gt;It might be hard to get a global IPv6 address assigned to you. Very few ISPs are handing out IPv6 network addresses currently, so in some cases a ULA is the only choice available to you.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Unique Local Addresses&lt;/h3&gt;

&lt;p&gt;One feature of unique local addresses is that they should be different for every network you see. Unlike IPv4, where the private ranges (192.168/16, 10/8 and 172.16/12) mean there are often networks with the same prefix - nearly every home and office has a network in the 192.168.1.0/24 or 10.0.0.0/24 range - you might never see a duplicate IPv6 ULA network prefix. This is because only the first 8 bits of the prefix are fixed, at "fd". The next 40 bits form a randomly generated global ID, and the final 16 bits of the /64 prefix are the subnet ID. System administrators are meant to generate the global ID themselves. A handy way to do this is to use a site like &lt;a href="http://unique-local-ipv6.com/" target="_blank"&gt;unique-local-ipv6.com&lt;/a&gt;. From there you will get a /48 address range, meaning you can have up to 65536 subnets!&lt;/p&gt;

&lt;p&gt;It's generally a good idea to use a random global ID rather than a hand-picked prefix like fd01:1:1:1::/64, as predictable prefixes increase your chance of a conflict. Why would you be worried about a conflict if these are not routable? Have you ever had to merge two networks that had the same IPv4 address range? Have you ever tried to set up a VPN between two networks with the same IP network range?&lt;/p&gt;
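
&lt;p&gt;If you would rather not rely on a website, a random prefix can be generated locally. This is a simplified sketch of the RFC 4193 scheme (which formally derives the global ID from a timestamp and an EUI-64 hash; plain randomness is fine for experimentation):&lt;/p&gt;

```python
import secrets

def random_ula_prefix() -> str:
    """Pick a random RFC 4193-style unique local /48: fd00::/8 plus a 40-bit global ID."""
    gid = secrets.token_bytes(5)  # 40 random bits for the global ID
    groups = (0xFD00 | gid[0], gid[1] * 256 + gid[2], gid[3] * 256 + gid[4])
    return ":".join("%x" % g for g in groups) + "::/48"

print(random_ula_prefix())  # e.g. fd5d:12c9:2201::/48
```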

&lt;h3&gt;Global Addresses&lt;/h3&gt;

&lt;p&gt;A global address will be assigned to you by an ISP, unless you get your own block and tell your ISP to route it to you. So, much like you get a public IP address from your ISP for IPv4, you will in future get an IPv6 network address range when you connect. &lt;strong&gt;Note:&lt;/strong&gt; not a single IP address but a whole block of IPv6 addresses. Depending on your ISP you may get only one network or a block with multiple networks. In this case the router will receive the network address prefix to use on your network. It will work the same as the steps below, except instead of a ULA network it will be a global address. Note you don't get assigned a full IPv6 address - you get the network prefix.&lt;br&gt;
&lt;br&gt;
So to summarise: you will need at least two IPv6 addresses on each interface if you want to do normal networking tasks like routing between networks - a link local address, which is always present, and at least one ULA or global address, or perhaps all three!&lt;/p&gt;

&lt;p&gt;For our exercise we will use ULA addresses to setup an IPv6 only LAN.&lt;/p&gt;

&lt;h3&gt;Set Up an ULA IPv6 Network in Linux&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; You can set up the following IPv6 network on a network that is already configured for IPv4; you can run IPv6 in parallel with IPv4. This is known as a &lt;strong&gt;dual stack&lt;/strong&gt; setup. Once done, you can stop the IPv4 services and run the network on IPv6 only, or keep it dual stack. One reason to test without IPv4 infrastructure running is to convince yourself that your network really is working over IPv6.&lt;/p&gt;

&lt;p&gt;Ok, now things start to get a bit more complicated. First we need some more theory :( We have already seen how a link local address is assigned, but how is the ULA address assigned? The nodes need some way to get the network prefix. For this, IPv6 makes use of a &lt;strong&gt;router advertisement service&lt;/strong&gt; that runs on the &lt;strong&gt;local network router&lt;/strong&gt;. This is what is meant by SLAAC (Stateless Address Autoconfiguration). The process is as follows:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;A link local address is generated by the host,&lt;/li&gt;
	&lt;li&gt;The host asks (solicits) any routers for configuration information.&lt;/li&gt;
	&lt;li&gt;The router responds with a router advertisement. This advertisement contains the &lt;strong&gt;network prefix&lt;/strong&gt; the host should use and the address of the router for the default route.&lt;/li&gt;
	&lt;li&gt;The router may also provide a DNS server address,&lt;/li&gt;
	&lt;li&gt;The node generates the host identifier portion of the IPv6 ULA address and assigns it to the interface. (Note: the router does not provide the entire IPv6 address.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So a node now has a ULA IPv6 address and a default gateway, and all should be good. This is known as stateless address assignment: the router does not assign an address per se. It has no idea what IPv6 address the host ends up using, only that it is in the advertised network. Hence the "stateless" in stateless address autoconfiguration (SLAAC).&lt;/p&gt;
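
&lt;p&gt;The resulting address is simply the advertised /64 prefix combined with the host's self-generated 64-bit interface identifier. A small sketch of the combination step (the interface ID shown is illustrative):&lt;/p&gt;

```python
import ipaddress

def slaac_address(prefix: str, interface_id: str) -> str:
    """Combine a router-advertised /64 prefix with a host-generated 64-bit interface ID."""
    net = ipaddress.IPv6Network(prefix)
    iid = int(ipaddress.IPv6Address("::" + interface_id))  # low 64 bits
    return str(ipaddress.IPv6Address(int(net.network_address) | iid))

print(slaac_address("fd5d:12c9:2201:1::/64", "922b:34ff:fe7b:6ff1"))
# fd5d:12c9:2201:1:922b:34ff:fe7b:6ff1
```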

&lt;h3&gt;Steps to Configure the Router Advertisement Service&lt;/h3&gt;

&lt;p&gt;The advertisement service can run on any Linux box, but that box will become the default route for IPv6 traffic. In future your ADSL router will provide router advertisement services. The box will also need IPv6 forwarding enabled (net.ipv6.conf.all.forwarding=1) to route traffic. First assign the Linux box a static IPv6 address from the ULA network. (In the examples that follow I use the fd5d:12c9:2201::/48 ULA routing prefix and I have chosen fd5d:12c9:2201:1::/64 as the network prefix, i.e. :1 is the subnet id.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configure a static IPv6 on Ubuntu&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;        sudo vi /etc/network/interfaces&lt;/pre&gt;

&lt;pre&gt;        auto eth0
        iface eth0 inet6 static
          address fd5d:12c9:2201:1::1
          netmask 64
          autoconf 0
          dad-attempts 0
          accept_ra 0
        &lt;/pre&gt;

&lt;p&gt;Now we need to install the router advertisement service:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Router Advertisement Daemon Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;        sudo apt-get install radvd&lt;/pre&gt;

&lt;pre&gt;        sudo vi /etc/radvd.conf&lt;/pre&gt;

&lt;pre&gt;        interface eth0
        {
            AdvSendAdvert on;
            prefix fd5d:12c9:2201:1::/64 {
                AdvOnLink on;
                AdvAutonomous on;
            };
            #Send DNS Server setting - assumes there is a DNS server setup at the address below
            RDNSS fd5d:12c9:2201:1::2 {
            };
        };
        &lt;/pre&gt;

&lt;p&gt;Restart the service and then restart the network on a client. You should see two IPv6 addresses on your network card.&lt;/p&gt;

&lt;pre&gt;        ip -6 address list&lt;/pre&gt;

&lt;p&gt;You can ping the router with the ping6 utility:&lt;/p&gt;

&lt;p&gt;"ping6 fd5d:12c9:2201:1::1" if this doesn't work try "ping6 fd5d:12c9:2201:1::1 -I eth0" -&amp;gt; Use the interface with the assigned IPv6 address. We will cover DNS and IPv6 in the net section.&lt;/p&gt;

&lt;p&gt;Congratulations, you have an IPv6 network up and running! If your router is multi-homed and has two interfaces with IPv6 addresses, you will be able to route between the two networks. You will need to set up two static IPv6 addresses in /etc/network/interfaces.&lt;/p&gt;

&lt;h2&gt;3) Set Up a Stateful ULA IPV6 LAN with DHCP &amp;amp; DNS Services&lt;/h2&gt;

&lt;p&gt;After the initial excitement of getting a routable IPv6 network up and running you start to realise you need a few more services:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;What if I want to send down other configuration information such as the NTP or SMTP server settings?&lt;/li&gt;
	&lt;li&gt;What if I want to make sure the same IPv6 address, not just the network prefix, always gets assigned to a server, like the NTP or SMTP server?&lt;/li&gt;
	&lt;li&gt;What if I want to track IP address assignment?&lt;/li&gt;
	&lt;li&gt;What if I want to provide dynamic updates to the local DNS server for local hosts? After all remembering all these 128 bit random addresses is going to be hard.&lt;/li&gt;
	&lt;li&gt;How do I add an IPv6 address to the DNS server zone file?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sadly the router advertisement service can only provide a network prefix, default route and DNS server address, and not much else. To provide the required services we need to use a DHCP server. To get the nodes to use a DHCP server we configure the &lt;strong&gt;radvd service&lt;/strong&gt; to tell them to contact a DHCP server for additional configuration information.&lt;/p&gt;

&lt;p&gt;You can configure radvd to tell the nodes to contact the DHCP server for&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;configuration info only (stateless) or&lt;/li&gt;
	&lt;li&gt;for configuration information and its IP address (stateful)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So a DHCP server can be used in both stateless and stateful IPv6 setups. We will use DHCP to send configuration information, such as DNS servers, and to assign IP addresses, since we want to assign fixed IPs to well-known hosts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Edit the /etc/radvd.conf file&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;        interface eth0
        {
             AdvSendAdvert on;
             AdvManagedFlag on; # tell clients to get a full IP address from the DHCP server
             AdvOtherConfigFlag on; # tell clients to get other configuration info from the DHCP server
             prefix fd5d:12c9:2201:1::/64 {
                AdvOnLink on;
                AdvAutonomous on;
            };
        };
        &lt;/pre&gt;

&lt;p&gt;Setting up DHCPv6 is similar to DHCP for IPv4. We will use isc-dhcp-server:&lt;/p&gt;

&lt;p class="text-center"&gt;"sudo apt-get install isc-dhcp-server"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Edit /etc/dhcp/dhcpd6.conf (note: comments removed for clarity):&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;        ddns-update-style interim;
        ddns-updates on;
        
        update-conflict-detection false;
        update-optimization false;
        
        option domain-name "jumpingbean.co.za";
        option dhcp6.name-servers fd5d:12c9:2201:1::2;
        
        default-lease-time 600;
        max-lease-time 7200;
        include "/etc/dhcp/rndc.key";
        
        log-facility local7;
        
        zone jumpingbean.co.za. {
                                  primary 127.0.0.1;
                                  key rndc-key;
        }
        
        
        zone 1.0.0.0.1.0.2.2.9.c.2.1.d.5.d.f.ip6.arpa. {
                primary 127.0.0.1;
                key rndc-key;
        }
        
        subnet6 fd5d:12c9:2201:1::/64 {
             range6 fd5d:12c9:2201:1::100 fd5d:12c9:2201:1::200;
        }
        &lt;/pre&gt;

&lt;p&gt;A lot of the configuration for DHCPv6 is the same as for DHCPv4, and I am assuming you are familiar with configuring DHCPv4 for dynamic DNS updates. Here we set up the DHCP server to:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;provide the DNS server address,&lt;/li&gt;
	&lt;li&gt;dynamically update the bind DNS server, by specifying which zone files should be updated,&lt;/li&gt;
	&lt;li&gt;set the domain name to use on nodes,&lt;/li&gt;
	&lt;li&gt;optionally push additional routes, NTP server settings, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: To set up a fixed IPv6 address in DHCPv6 you make use of a DUID (DHCP Unique Identifier), rather than the MAC address used in IPv4 DHCP. The DUID is assigned by the operating system and remains the same even if network cards change. The complete unique id is made up of the DUID and an IAID (Identity Association Identifier), as you may have more than one interface on a node requesting IP addresses from the DHCP server, i.e. each interface has a unique ID.&lt;/p&gt;

&lt;p&gt;Below is an example of a fixed assignment:&lt;/p&gt;

&lt;pre&gt;        host example {
          host-identifier option dhcp6.client-id 31:30:30:30:30:31:33;
          fixed-address6 fd5d:12c9:2201:1::101;
        }
        &lt;/pre&gt;
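
&lt;p&gt;The dhcp6.client-id value is just colon-separated hex bytes. Decoding the (hypothetical) example value used above shows it is printable ASCII:&lt;/p&gt;

```python
def client_id_bytes(client_id: str) -> bytes:
    """Decode a colon-separated dhcp6.client-id hex string into raw bytes."""
    return bytes(int(b, 16) for b in client_id.split(":"))

# The example client-id above happens to decode to printable ASCII
print(client_id_bytes("31:30:30:30:30:31:33"))  # b'1000013'
```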

&lt;p&gt;I am not aware of an easy way to read the DUID and IAID of an interface on a Linux node. You can look in the leases file on the DHCP server once an address is assigned, but this is not ideal. A binary copy of the node's DUID can be found at /var/lib/dhcpv6/dhcp6s_duid, but it cannot simply be cat'ed to stdout. If anyone knows how to read the unique id for an interface on a node, please let the internet know :)&lt;/p&gt;

&lt;h4&gt;DHCP Complexities&lt;/h4&gt;

&lt;p&gt;Before moving on to configuring the name server there are some issues to note about the DHCP server.&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;You can run an IPv4 and IPv6 DHCP server in parallel on the same network as they listen on different ports. So you can have a dual stack DHCP IPv6 and IPv4 network but it does require two running instances of the DHCP server.&lt;/li&gt;
	&lt;li&gt;To start the isc-dhcp-server in IPv6 mode you need to provide it with the option "-6". You can set this in /etc/default/isc-dhcp-server via OPTIONS="-6" on Ubuntu. BUT! On Ubuntu 14.10 this is ignored and the init script will stubbornly start in IPv4 mode. To get the DHCP server to start in DHCPv6 mode, add the command below to the /etc/rc.local file as a temporary workaround and disable the init system from starting the DHCP server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p class="text-center"&gt;sudo update-rc.d dhcpd disable&lt;/p&gt;

&lt;p class="text-center"&gt;dhcpd -6 -cf /etc/dhcp/dhcpd6.conf -lf /var/lib/dhcp/dhcpd6.leases eth0&lt;/p&gt;

&lt;p&gt;If you are going to run dual stack you will need to add this for the IPv4 mode of the DHCP server:&lt;/p&gt;

&lt;p class="text-center"&gt;dhcpd -4 -cf /etc/dhcp/dhcpd4.conf -lf /var/lib/dhcp/dhcpd4.leases eth0&lt;/p&gt;

&lt;p&gt;You might also have AppArmor preventing writes to the lease file if you try to write it to a different location. You can either stop AppArmor or configure the DHCP server to write the lease file to a location that its profile permits.&lt;/p&gt;

&lt;h4&gt;DNS Setup: Bind9&lt;/h4&gt;

&lt;p&gt;First you will need to install bind9. DNS differs from DHCP in that only one instance of the DNS server needs to run to service both IPv4 and IPv6 requests; it simply needs to bind to both IPv4 and IPv6 addresses. A DNS server will answer queries for an IPv4 or IPv6 address no matter what the source address of the query, i.e. it doesn't look at whether the query came from an IPv4 or IPv6 address, it just looks at what it was asked to look up - the record type.&lt;/p&gt;

&lt;p class="text-center"&gt;"sudo apt-get install bind9"&lt;/p&gt;

&lt;p&gt;Next we need to edit the bind configuration to set up our zones and tell bind to listen on IPv6, or on both IPv6 and IPv4, interfaces. Most of this is standard bind9 setup as for IPv4. The only real changes at this point are the AAAA records in the zone file and the awful reverse zone name, which is a mission to type without making a mistake.&lt;/p&gt;
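
&lt;p&gt;Rather than typing the reverse zone name by hand, you can generate it; a minimal sketch using Python's standard ipaddress module:&lt;/p&gt;

```python
import ipaddress

def reverse_zone(prefix: str) -> str:
    """Build the ip6.arpa zone name for an IPv6 prefix (nibble format)."""
    net = ipaddress.IPv6Network(prefix)
    nibbles = net.network_address.exploded.replace(":", "")  # 32 hex nibbles
    keep = net.prefixlen // 4                                # one nibble per 4 bits
    return ".".join(reversed(nibbles[:keep])) + ".ip6.arpa"

print(reverse_zone("fd5d:12c9:2201:1::/64"))
# 1.0.0.0.1.0.2.2.9.c.2.1.d.5.d.f.ip6.arpa
```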

&lt;p&gt;&lt;br&gt;
&lt;strong&gt;/etc/bind/named.conf.options&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;        options {
                  directory "/var/cache/bind";
                  forwarders {
                                8.8.8.8;
                  };
                  dnssec-validation auto;
        
                  auth-nxdomain no; # conform to RFC1035
                  listen-on-v6 { any; };
        };
        &lt;/pre&gt;

&lt;p&gt;&lt;br&gt;
&lt;strong&gt;/etc/bind/named.conf.local&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;        zone "jumpingbean.co.za" {
                type master;
                allow-update { key rndc-key; };
                file "/var/lib/bind/jumpingbean.co.za";
        };
        
        zone "1.0.0.0.1.0.2.2.9.c.2.1.d.5.d.f.ip6.arpa" {
                type master;
                file "/var/lib/bind/fd5d:129c:2201:1";
                allow-update { key rndc-key; };
        };
        &lt;/pre&gt;

&lt;p&gt;The zone files below already contain some dynamically updated IPv6 addresses from the DHCP server - those are the entries annotated with TXT records. You only make entries for the static IPv6 addresses and let DHCP handle the dynamic entries.&lt;/p&gt;

&lt;p&gt;Zone files&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;/var/lib/bind/jumpingbean.co.za&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;        $ORIGIN .
        
        $TTL 604800     ; 1 week
        jumpingbean.co.za           IN SOA  ns.jumpingbean.co.za. no.jumpingbean.co.za. (
                                        182        ; serial
                                        604800     ; refresh (1 week)
                                        86400      ; retry (1 day)
                                        2419200    ; expire (4 weeks)
                                        604800     ; minimum (1 week)
                                        )
                                NS      ns.jumpingbean.co.za.
                                A       127.0.0.1
                                AAAA    ::1
        $TTL 300        ; 5 minutes
        android-a74e95670198fd6a A      10.0.10.4
                                TXT     "0002ec64161ce51591018b9eb0a01ae6b9"
        $TTL 604800     ; 1 week
        gateway                 AAAA    fd5d:12c9:2201:1::2
        ns                      AAAA    fd5d:12c9:2201:1::2
        $TTL 300        ; 5 minutes
        trinity                 A       10.0.10.3
        $TTL 187        ; 3 minutes 7 seconds
            
                            TXT     "025c83d7b0b5ca62d26381f057fbeed483"
        &lt;/pre&gt;

&lt;p&gt;&lt;br&gt;
&lt;strong&gt;/var/lib/bind/fd5d:129c:2201:1&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;        $ORIGIN .
        $TTL 604800     ; 1 week
        jumpingbean.co.za           IN SOA  jumpingbean.co.za. no.jumpingbean.co.za. (
                                        182        ; serial
                                        604800     ; refresh (1 week)
                                        86400      ; retry (1 day)
                                        2419200    ; expire (4 weeks)
                                        604800     ; minimum (1 week)
                                        )
                                NS      ns.jumpingbean.co.za.
                                A       127.0.0.1
                                AAAA    ::1
        $ORIGIN jumpingbean.co.za.
        $TTL 300        ; 5 minutes
        android-a74e95670198fd6a A      10.0.10.4
                                TXT     "0002ec64161ce51591018b9eb0a01ae6b9"
        $TTL 604800     ; 1 week
        gateway                 AAAA    fd5d:12c9:2201:1::2
        ns                      AAAA    fd5d:12c9:2201:1::2
        $TTL 300        ; 5 minutes
        trinity                 A       10.0.10.3
        $TTL 187        ; 3 minutes 7 seconds
                                TXT     "025c83d7b0b5ca62d26381f057fbeed483"
        &lt;/pre&gt;

&lt;p&gt;Now we have a fully functional IPv6 LAN. You may have IPv4 running at the same time, and an interface can have both an IPv4 and an IPv6 address. To test DNS try the following (you can try it against public DNS servers too):&lt;/p&gt;

&lt;p class="text-center"&gt;dig @DNS-IP www.google.com A&lt;/p&gt;

&lt;p class="text-center"&gt;dig @DNS-IP www.google.com AAAA&lt;/p&gt;

&lt;p&gt;You can query the local DNS server you just setup as well for an address you assigned.&lt;/p&gt;

&lt;p&gt;So now you have a fully functioning IPv6 network. You can disable the IPv4 network and see that you can still reach all machines and services; on the client side you will not have to make any explicit configuration changes. Everything will be fine until you try to access the internet. The following will fail:&lt;/p&gt;

&lt;p class="text-center"&gt;"ping6 www.google.com"&lt;/p&gt;

&lt;p class="text-center"&gt;"ping www.google.com"&lt;/p&gt;

&lt;p&gt;The first ping, to Google's IPv6 address, will fail unless you have a global address; if you only have an fd00::/7 address you won't get a response. The second will fail because you are trying to access an IPv4 host from an IPv6 network. And this leads to the second issue we need to deal with to understand IPv6.&lt;/p&gt;

&lt;h1&gt;B: Connecting to the Internet from an IPv6 Network&lt;/h1&gt;

&lt;p&gt;This is the most troublesome part of the IPv6 protocol. There are a myriad of "transition mechanisms" designed to ease the transition from IPv4 to IPv6. None of them are ideal as far as I can tell, and most have requirements which place them outside the reach of small and medium businesses until IPv6 becomes more widespread.&lt;/p&gt;

&lt;p&gt;At the time of writing the most common arrangement is for an ISP to assign a single IPv4 address to your router. As IPv6 becomes more widely adopted this will change, and the steps below may become less relevant. The issue is that IPv6 services cannot be accessed via an IPv4 address, and IPv4 services cannot be reached natively from an IPv6-only host. We are going to look at two solutions:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;NAT64,&lt;/li&gt;
	&lt;li&gt;Using a tunnel broker&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;NAT64 Transition Mechanism&lt;/h3&gt;

&lt;p&gt;For the sake of simplicity we will assume a single IPv4 address on the router. The advantage of NAT64 is that you can use IPv6 internally and still use your ISP's public IPv4 address. Another advantage is that you can keep using your IPTables IPv4 firewall rules. The disadvantage is that you will only be able to connect to IPv4 hosts on the internet, so it somewhat defeats the purpose of setting up an IPv6 network, other than to experiment with its features.&lt;/p&gt;

&lt;p&gt;NAT64 translates all outgoing IPv6 addresses to a pool of IPv4 addresses and then routes the requests to the ISP. This means there will be double NATting: once for NAT64, and again from your router to the ISP. To set up NAT64, install tayga.&lt;/p&gt;

&lt;p class="text-center"&gt;"sudo apt-get install tagya"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Edit /etc/tayga.conf&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;        tun-device nat64 # the interface to create to perform the natting
        ipv4-addr 192.168.255.1 # the ipv4 address for the tayga interface from the pool below.
        ipv6-addr fd5d:12c9:2201:1::3 # the IPv6 address to assign to the tun-device nat64 above
        prefix fd5d:12c9:2201:1:1:1::/96 # see explanation below
        dynamic-pool 192.168.255.0/24 # the pool of IPv4 addresses to assign to each IPv6 address
        data-dir /var/spool/tayga
        &lt;/pre&gt;

&lt;p&gt;So tayga creates an interface to listen on. The interface gets an IPv6 and an IPv4 address. Any request from an IPv6 source for an IPv4 destination will get routed to the nat64 interface. To do this we need to add a route for the translated IPv6 addresses:&lt;/p&gt;

&lt;p class="text-center"&gt;"sudo ip route add fd5d:12c9:2201:1:1:1::/96 dev nat64"&lt;/p&gt;

&lt;p&gt;All that remains is to explain where the network prefix fd5d:12c9:2201:1:1:1::/96, used above and in the tayga.conf file, comes from. This is a network from your ULA block which you set aside for IPv4-to-IPv6 address translation. You need to configure the bind DNS server to automatically map IPv4 addresses into IPv6 using this prefix (DNS64). This way you won't get an error when you try to ping an IPv4 host from an IPv6 address.&lt;/p&gt;
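
&lt;p&gt;To make the mapping concrete, here is a sketch of what the DNS64 synthesis does: the IPv4 address simply occupies the last 32 bits of the /96 prefix:&lt;/p&gt;

```python
import ipaddress

def nat64_map(prefix96: str, ipv4: str) -> str:
    """Embed an IPv4 address in the last 32 bits of a /96 NAT64/DNS64 prefix."""
    net = ipaddress.IPv6Network(prefix96)
    v4 = ipaddress.IPv4Address(ipv4)
    return str(ipaddress.IPv6Address(int(net.network_address) | int(v4)))

print(nat64_map("fd5d:12c9:2201:1:1:1::/96", "8.8.8.8"))
# fd5d:12c9:2201:1:1:1:808:808
```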

&lt;p&gt;&lt;strong&gt;Edit the /etc/bind/named.conf.options file and add the dns64 block shown below:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;        options {
                directory "/var/cache/bind";
        
                 forwarders {
                        8.8.8.8;
                 };
        
                dns64 fd5d:12c9:2201:1:1:1::/96 {
                        clients {
                                any;
                        };
        
                        exclude {
                                any;
                        };
                };
        
                dnssec-validation auto;
        
                auth-nxdomain no;    # conform to RFC1035
                listen-on-v6 { any; };
        
        };
        &lt;/pre&gt;

&lt;p&gt;At this point your IPv6 network can live happily in a sea of public IPv4 addresses. Unfortunately you won't be able to access an IPv6 host with a global address just yet :(&lt;/p&gt;

&lt;h3&gt;Using Tunnel Brokers&lt;/h3&gt;

&lt;p&gt;Alternatively you can keep a dual stack and set up an IPv6 tunnel with one of the well-known tunnel brokers, such as Hurricane Electric, or set up a Teredo tunnel. Both have drawbacks. The &lt;a href="https://tunnelbroker.net/" target="_blank"&gt;Hurricane Electric&lt;/a&gt; service requires you to set up a SIT tunnel to one of their points of presence. This requires a static IPv4 address, or you will need to constantly update the configuration on their site when your IP changes. Teredo has limitations of its own.&lt;/p&gt;

&lt;p&gt;The advantage of a SIT tunnel is that you get a routable global IPv6 address at the POP and can access IPv6 nodes natively. The disadvantage is that your traffic needs to be tunneled over the IPv4 internet, increasing latency. With a SIT tunnel you only have one global IP address, for the host the tunnel is created on, so you will need to set up a tunnel per node, or keep an IPv4 network and NAT it through the SIT tunnel.&lt;/p&gt;

&lt;h2&gt;Risks of Using IPv6&lt;/h2&gt;

&lt;p&gt;One point to bear in mind when using IPv6 is that it will bypass all your IPv4 security measures. If you have an IPTables ruleset for IPv4 you will need to re-implement it for IPv6 with &lt;strong&gt;ip6tables&lt;/strong&gt;. There is currently no NATting in ip6tables. One thing to remember: if you are in a co-location data centre and getting auto-generated IPv6 link-local addresses, any &lt;strong&gt;machine in the same centre can connect to your server via IPv6, potentially bypassing any IPv4 ruleset you have set up&lt;/strong&gt;! Some data centres might be assigning you global IPv6 addresses without you knowing! So check your co-location servers and make sure you don't have any back-doors due to mis-configuration. At least one data centre we have servers at had helpfully given us a global IPv6 address.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;The transition mechanisms are a mess. One just has to look at the acronym soup on the &lt;a href="https://en.wikipedia.org/wiki/IPv6_transition_mechanisms" target="_blank"&gt;Wikipedia transition mechanisms page&lt;/a&gt; dealing with the myriad of ways to interoperate between IPv4 and IPv6. We have only tried two! We are still experimenting with the tunneling mechanisms above to see if they work properly, as they promise the most benefits: being able to reach servers in different locations over the public internet with routable IPv6 addresses. So far we have met with limited success; no doubt we will learn a bit more. But this also points to the biggest reason IPv6 is not being adopted: there is no clean way for the two protocols to interoperate. A painful time will come when 50% of the world is on IPv6 and the rest on IPv4.&lt;/p&gt;

&lt;p&gt;At least you can get some experience setting up internal IPv6 networks in the meantime.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;</summary>
    <dc:creator>Mark Clarke</dc:creator>
    <dc:date>2025-04-21T04:49:00Z</dc:date>
  </entry>
  <entry>
    <title>Email Authentication Records: SPF, DKIM, and DMARC</title>
    <link rel="alternate" href="https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=14188057" />
    <author>
      <name>Mark Clarke</name>
    </author>
    <id>https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=14188057</id>
    <updated>2025-04-13T17:07:27Z</updated>
    <published>2025-04-13T10:35:00Z</published>
    <summary type="html">&lt;div class="container my-4"&gt;
&lt;p&gt;Syndicates that are sophisticated, at least from a social engineering point of view, operate in South Africa, bombarding organisations with fake government tender or courier emails, often sending multiple scams daily. This two-part series explores how to protect your organisation from these and other fraudulent emails.&lt;/p&gt;

&lt;p&gt;In Part 1, we focus on the email authentication protocols SPF, DKIM, and DMARC, and reveal their strengths and weaknesses. Part 2 will cover non-authentication indicators of scams, such as suspicious links and behavioural patterns, as well as some disturbing indications of compromised government servers or their users.&lt;/p&gt;

&lt;div class="card mb-lg-3 ml-lg-3 mt-lg-3 pl-lg-5 pr-lg-5 text-center text-dark"&gt;
&lt;div class="card-body"&gt;&lt;a href="https://cybersecurityservices.tech/" rel="noopener" target="_blank"&gt;&lt;strong&gt;Cybersecurity Consulting&lt;/strong&gt;&lt;/a&gt;

&lt;p class="card-text"&gt;&lt;a href="https://cybersecurityservices.tech/services" rel="noopener" target="_blank"&gt;Expert services to protect your business.&lt;/a&gt;&lt;br&gt;
&lt;br&gt;
&lt;a href="https://cybersecuritytraining.tech/#careers" rel="noopener" target="_blank"&gt;&lt;strong&gt;Cybersecurity Training&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://cybersecuritytraining.tech/w/eccouncil/certified-ethical-hacker-ceh-training" rel="noopener" target="_blank"&gt;CEH&lt;/a&gt; &amp;amp; D&lt;a href="https://cybersecuritytraining.tech/w/eccouncil/certified-devsecops-engineer" rel="noopener" target="_blank"&gt;DevSecOps&lt;/a&gt; certifications and more.&lt;/p&gt;
&lt;a class="btn btn-primary" href="https://jumpingbean.co.za/about/#contactus" target="_blank"&gt;Contact Us&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;

&lt;h2&gt;SPF: Sender Policy Framework&lt;/h2&gt;

&lt;div class="card p-4"&gt;
&lt;div class="card-body"&gt;
&lt;p class="mb-3" itemprop="articleBody"&gt;SPF is a DNS &lt;code&gt;TXT&lt;/code&gt; record listing authorised email servers for a domain. When an email arrives from &lt;code&gt;user@example.com&lt;/code&gt;, the receiving server checks the SPF record against the sending server’s IP, using the &lt;code&gt;MAIL FROM&lt;/code&gt; or &lt;code&gt;Return-Path&lt;/code&gt; address—not the "From" address shown in email clients.&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;Senders can configure SPF to reject unauthorised emails or flag them as a soft fail. Without SPF, servers may trust the sender or use greylisting, increasing spam risks.&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;To check a domain’s SPF record, use:&lt;/p&gt;

&lt;div class="bg-light mb-3 p-3"&gt;&lt;code&gt;dig -t TXT google.com&lt;/code&gt;&lt;/div&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;Look for a record starting with &lt;code&gt;v=spf1&lt;/code&gt;, like:&lt;/p&gt;

&lt;div class="bg-light mb-3 p-3"&gt;&lt;code&gt;v=spf1 include:_spf.google.com ~all&lt;/code&gt;&lt;/div&gt;

&lt;div class="alert alert-warning" role="alert"&gt;&lt;strong&gt;Note:&lt;/strong&gt; SPF doesn’t verify the displayed "From" address, which spammers can spoof.&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
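&lt;p&gt;The mechanics above can be illustrated with a short sketch. This is not a real SPF validator (RFC 7208 adds &lt;code&gt;include&lt;/code&gt; resolution, macros and DNS lookup limits); it merely splits a record into its qualifiers and mechanisms, and the helper name &lt;code&gt;parse_spf&lt;/code&gt; is our own invention:&lt;/p&gt;

```python
# Illustrative sketch only: split an SPF TXT record into (qualifier, mechanism)
# pairs. A real validator must follow RFC 7208 (include/redirect resolution,
# macro expansion, the 10-DNS-lookup limit, and so on).

def parse_spf(record):
    """Return (version, [(qualifier, mechanism)]) for an SPF TXT record."""
    parts = record.split()
    if not parts or parts[0].lower() != "v=spf1":
        raise ValueError("not an SPF record")
    terms = []
    for term in parts[1:]:
        # Qualifiers: '+' pass (the default), '-' fail, '~' softfail, '?' neutral.
        if term[0] in "+-~?":
            terms.append((term[0], term[1:]))
        else:
            terms.append(("+", term))
    return parts[0], terms

version, terms = parse_spf("v=spf1 include:_spf.google.com ~all")
print(terms)  # [('+', 'include:_spf.google.com'), ('~', 'all')]
```

&lt;p&gt;Run against the Google record above, this shows the &lt;code&gt;~all&lt;/code&gt; softfail qualifier that tells receivers to accept-but-flag mail from unlisted servers.&lt;/p&gt;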

&lt;h2&gt;DKIM: DomainKeys Identified Mail&lt;/h2&gt;

&lt;div class="card p-4"&gt;
&lt;div class="card-body"&gt;
&lt;p class="mb-3" itemprop="articleBody"&gt;DKIM ensures email integrity using public/private key pairs. The sender’s public key resides in a DNS &lt;code&gt;TXT&lt;/code&gt; record. The sending server signs headers (e.g., &lt;code&gt;To&lt;/code&gt;, &lt;code&gt;Subject&lt;/code&gt;, &lt;code&gt;Date&lt;/code&gt;) and the email body, embedding a hash in the &lt;code&gt;DKIM-Signature&lt;/code&gt; header.&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;The &lt;code&gt;bh=&lt;/code&gt; tag confirms the body’s integrity. While the &lt;code&gt;From&lt;/code&gt; header is often signed, DKIM doesn’t validate its authenticity.&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;Here’s an example DKIM signature from Google:&lt;/p&gt;

&lt;div class="bg-light mb-3 p-3"&gt;
&lt;pre&gt;&lt;code&gt;DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20230601; t=1744370196; x=1744974996; darn=jumpingbean.co.za;
        h=to:from:subject:message-id:list-id:feedback-id:precedence
         :list-unsubscribe:reply-to:date:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=UcP3oq8NmmBzZQi2XhAhYnWWRmOQ4WATXuFEpb6k+ww=;
        b=TxAXhYUdqgP0RFnLjMMPj9Hr8C2JRyKFrBypBtNqln+i/B3WRx+f/AGlUxxlNEuLNZ...&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;To retrieve the public key, query the domain (&lt;code&gt;d=google.com&lt;/code&gt;) and selector (&lt;code&gt;s=20230601&lt;/code&gt;):&lt;/p&gt;

&lt;div class="bg-light mb-3 p-3"&gt;&lt;code&gt;dig 20230601._domainkey.google.com TXT&lt;/code&gt;&lt;/div&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;This returns:&lt;/p&gt;

&lt;div class="bg-light mb-3 p-3"&gt;&lt;code&gt;"v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA4zd3nfUoLHWFbfoPZzAb8bvjsFIIFsNypweLuPe4M+vAP1YxObFxRnpvLYz7Z+bORKLber5aGmgFF9iaufsH1z0..."&lt;/code&gt;&lt;/div&gt;

&lt;div class="alert alert-warning" role="alert"&gt;&lt;strong&gt;Note:&lt;/strong&gt; DKIM allows spoofed "From" addresses to pass if the signed headers and body are intact.&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
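&lt;p&gt;To make the &lt;code&gt;bh=&lt;/code&gt; tag concrete, here is a rough sketch of how the body hash is derived. It assumes the "simple" body canonicalisation (any run of trailing empty lines reduced to a single CRLF) with SHA-256; real DKIM verifiers must also handle "relaxed" canonicalisation and the optional &lt;code&gt;l=&lt;/code&gt; body-length tag:&lt;/p&gt;

```python
import base64
import hashlib

# Sketch of the bh= body hash under "simple" canonicalisation: trailing
# CRLFs are reduced to exactly one, then the canonical body is SHA-256
# hashed and base64 encoded.

def body_hash(body: bytes) -> str:
    canonical = body.rstrip(b"\r\n") + b"\r\n"
    return base64.b64encode(hashlib.sha256(canonical).digest()).decode("ascii")

# Trailing blank lines do not change the hash under simple canonicalisation.
print(body_hash(b"Hello\r\n") == body_hash(b"Hello\r\n\r\n\r\n"))  # True
```

&lt;p&gt;A receiver recomputes this value over the body it received and compares it to the &lt;code&gt;bh=&lt;/code&gt; tag; a mismatch means the body was altered in transit.&lt;/p&gt;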

&lt;h2&gt;DMARC: Domain-based Message Authentication&lt;/h2&gt;

&lt;div class="card p-4"&gt;
&lt;div class="card-body"&gt;
&lt;p class="mb-3" itemprop="articleBody"&gt;DMARC aligns SPF and DKIM results with the "From" domain. For example, an email claiming to be from &lt;code&gt;example.com&lt;/code&gt; must have &lt;code&gt;MAIL FROM&lt;/code&gt; (SPF) and DKIM domains matching &lt;code&gt;example.com&lt;/code&gt;.&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;A strict DMARC policy (&lt;code&gt;p=reject&lt;/code&gt;) blocks misaligned emails, even if SPF or DKIM pass for another domain. DMARC also provides reports to monitor authentication issues.&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;Check a DMARC policy with:&lt;/p&gt;

&lt;div class="bg-light mb-3 p-3"&gt;&lt;code&gt;dig _dmarc.google.com TXT&lt;/code&gt;&lt;/div&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;Example response:&lt;/p&gt;

&lt;div class="bg-light mb-3 p-3"&gt;&lt;code&gt;"v=DMARC1; p=reject; rua=mailto:mailauth-reports@google.com"&lt;/code&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
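&lt;p&gt;A DMARC record is just a semicolon-separated list of &lt;code&gt;tag=value&lt;/code&gt; pairs, so a minimal sketch of pulling out the policy might look like this (illustrative only; RFC 7489 defines tag defaults and stricter syntax that a real parser must honour):&lt;/p&gt;

```python
# Sketch: split a DMARC TXT record into its tag=value pairs.
# Illustrative only -- RFC 7489 defines defaults (e.g. sp= falls back
# to p=) and strict syntax rules that a real parser must follow.

def parse_dmarc(record: str) -> dict:
    tags = {}
    for pair in record.split(";"):
        pair = pair.strip()
        if "=" in pair:
            key, _, value = pair.partition("=")
            tags[key.strip()] = value.strip()
    return tags

policy = parse_dmarc("v=DMARC1; p=reject; rua=mailto:mailauth-reports@google.com")
print(policy["p"])  # reject
```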

&lt;h2&gt;ARC: Authenticated Received Chain&lt;/h2&gt;

&lt;div class="card p-4"&gt;
&lt;div class="card-body"&gt;
&lt;p class="mb-3" itemprop="articleBody"&gt;ARC tracks SPF, DKIM, and DMARC results for emails passing through intermediaries like mailing lists, preserving their authenticity. We won’t cover ARC in detail here.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;

&lt;h2&gt;Limitations of Email Authentication&lt;/h2&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;SPF, DKIM, and DMARC reduce email spoofing, but they rely on proper configuration. Missing DMARC records or misconfigured DNS entries allow spammers to fake the "From" address, bypassing SPF and DKIM unless DMARC enforces alignment.&lt;/p&gt;

&lt;h2&gt;Case Studies: South Africa Scam Emails&lt;/h2&gt;

&lt;div class="alert alert-info mb-4" role="alert"&gt;Examples use sanitized headers, but authentication details are accurate.&lt;/div&gt;

&lt;h3&gt;Spoofed Fastway.co.za Phishing Email&lt;/h3&gt;

&lt;div class="card p-4"&gt;
&lt;pre&gt;&lt;code&gt;
X-Mozilla-Status: 0001
X-Mozilla-Status2: 00000000
Return-Path: &amp;lt;imodzeb@vps21150.dreamhostps.com&amp;gt;
Delivered-To: mark@recipient-domain.com
Received: from mailserver1.recipient-domain.com (localhost [127.0.0.1])
    by mailserver1.recipient-domain.com (Postfix) with ESMTP id CFED3B829E3
    for &amp;lt;mark@recipient-domain.com&amp;gt;; Sun, 13 Apr 2025 07:45:41 +0000 (UTC)
Received: from webserver1.sender-hosting.com [176.31.78.120]
    by mailserver1.recipient-domain.com with POP3 (fetchmail-6.4.38)
    for &amp;lt;mark@recipient-domain.com&amp;gt; (single-drop); Sun, 13 Apr 2025 07:45:41 +0000 (UTC)
Received: from vps21150.dreamhostps.com (vps21150.dreamhostps.com [69.163.197.224])
    by webserver1.sender-hosting.com (Postfix) with ESMTPS id EBE7A16E02CA
    for &amp;lt;mark@recipient-domain.com&amp;gt;; Sun, 13 Apr 2025 08:45:06 +0100 (BST)
Authentication-Results: webserver1.sender-hosting.com;
    dkim=none;
    spf=pass (webserver1.sender-hosting.com: domain of imodzeb@vps21150.dreamhostps.com designates 69.163.197.224 as permitted sender) smtp.mailfrom=imodzeb@vps21150.dreamhostps.com;
    dmarc=none
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed;
    d=signing-domain.com; s=mail; t=1744530307;
    h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
     to:to:cc:content-type:content-type;
    bh=kr/tXZjETnxLC9CK82Dp+VNXBQQgOR10CK28LSxzjNA=;
    b=tIWZANiYUPoYUsp96EsByFCxEtwQ5bP9Gm0aUe086x5OdDygpCaO/lT/v7UEyE6LfOCtsc
    6UMTlOxtn63xGyf5zo6BGx9keD9LzdvAgGtpCPwYgUjQ6R7E9Q5cOA0hbIQ4i7doJJED2J
    Wg3FinBm+0z2YerOn7/K9kOh/rAclzQ=
ARC-Authentication-Results: i=1;
    webserver1.sender-hosting.com;
    dkim=none;
    spf=pass (webserver1.sender-hosting.com: domain of imodzeb@vps21150.dreamhostps.com designates 69.163.197.224 as permitted sender) smtp.mailfrom=imodzeb@vps21150.dreamhostps.com;
    dmarc=none
ARC-Seal: i=1; s=mail; d=signing-domain.com; t=1744530307; a=rsa-sha256;
    cv=none;
    b=H2Q6K2fIhxLyjvHXi52wFP31veFojhXRvspFrKM/qWSCmhSL9Q1TfWQ7BXFtoRKPbUrIUr
    lU1PRxaugc744ocBPLzwWNhFN5MECENbuc2AQx88+MR2zhT+nzMzvDOfTYVeiGq4vtSPnS
    6yOk4xXo8RKxWLSMb8Vwquujwl7GxSc=
Received: by vps21150.dreamhostps.com (Postfix, from userid 6739095)
    id 4Zb1nS3r7FzRN7g8m; Sun, 13 Apr 2025 00:11:56 -0700 (PDT)
To: mark@recipient-domain.com
Subject: Your Shipment is on Hold
X-PHP-Originating-Script: 6739095:send.php
From: Fastway couriers &amp;lt;webmail@fastway.co.za&amp;gt;
Content-Type: text/html; charset=UTF-8
Message-Id: &amp;lt;4Zb1nS3r7FzRN7g8m@vps21150.dreamhostps.com&amp;gt;
Date: Sun, 13 Apr 2025 00:11:56 -0700 (PDT)
X-Spam-Status: Yes, score=11.66
X-Spam-Level: ***********
X-Spamd-Bar: +++++++++++
X-Spam: Yes&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h3&gt;Analysis&lt;/h3&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;This email claimed to be from&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;&lt;code&gt;From: Fastway couriers &amp;lt;webmail@fastway.co.za&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;It passed the SPF check against&amp;nbsp;&lt;code&gt;domain of imodzeb@vps21150.dreamhostps.com&lt;/code&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Authentication-Results: webserver1.sender-hosting.com;
    dkim=none;
    spf=pass (webserver1.sender-hosting.com: domain of imodzeb@vps21150.dreamhostps.com designates 69.163.197.224 as permitted sender) smtp.mailfrom=imodzeb@vps21150.dreamhostps.com;
    dmarc=none&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Why did the email server validate it against&amp;nbsp;&lt;code&gt;domain of imodzeb@vps21150.dreamhostps.com&lt;/code&gt;? Because of the Return-Path header:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Return-Path: &amp;lt;imodzeb@vps21150.dreamhostps.com&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;No DKIM signature was presented by &lt;code&gt;vps21150.dreamhostps.com&lt;/code&gt; so it could't be checked for integrity so maybe someone altered the email text along the way. Someone like another, more sophisticated spammer I suppose.&amp;nbsp; Lastly since &lt;code&gt;fasttway.co.za&lt;/code&gt; lacks a DMARC policy record it couldn't be used to detected the spoofed mail.&lt;/p&gt;

&lt;h4&gt;Dreamhostps SPF&lt;/h4&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;We can check &lt;code&gt;vps21150.dreamhostps.com &lt;/code&gt;SPF record with:&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;&lt;code&gt;dig -t TXT vps21150.dreamhostps.com&lt;/code&gt;&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;This results in a record which tell us to check &lt;code&gt;netblocks.dreamhost.com&lt;/code&gt;'s SPF record:&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;&lt;code&gt;vps21150.dreamhostps.com. 300&amp;nbsp;&amp;nbsp; &amp;nbsp;IN&amp;nbsp;&amp;nbsp; &amp;nbsp;TXT&amp;nbsp;&amp;nbsp; &amp;nbsp;"v=spf1 mx include:netblocks.dreamhost.com -all"&lt;/code&gt;&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;You can check&amp;nbsp;&lt;code&gt;netblocks.dreamhost.com&lt;/code&gt; records with:&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;&lt;code&gt;dig -t TXT netblocks.dreamhostps.com&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;Fastways SPF &amp;amp; DMARC&lt;/h4&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;We can check &lt;code&gt;fastway.co.za&amp;nbsp;&lt;/code&gt;SPF record with:&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;&lt;code&gt;dig -t TXT fastway.co.za&lt;/code&gt;&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;This results in :&lt;br&gt;
&lt;br&gt;
&lt;code&gt;fastway.co.za.&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;300&amp;nbsp;&amp;nbsp; &amp;nbsp;IN&amp;nbsp;&amp;nbsp; &amp;nbsp;TXT&amp;nbsp;&amp;nbsp; &amp;nbsp;"v=spf1 mx ip4:103.61.69.0/24 ip4:101.0.80.178/32 ip4:101.0.80.179/32 ip4:101.0.80.180/32 ip4:101.0.80.181/32 ip4:192.168.32.165/24 include:spf.protection.outlook.com -all"&lt;/code&gt;&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;But as explained this was never looked up. We can check fastway.co.za doesn't have a DMARC record with:&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;&lt;code&gt;dig -t TXT _dmarc.fastway.co.za&lt;/code&gt;&lt;/p&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;This results in an empty response. So DMARC is not configured at the time of writing.&lt;/p&gt;

&lt;h4&gt;Non-Authentication Indicators of Phishing&lt;/h4&gt;

&lt;p itemprop="articleBody"&gt;The email body included a suspicious link (&lt;code&gt;https://swingzbegonacervera.com/test&lt;/code&gt;), further indicating spam. I might do another article on this at some future date. The syndicates operating in South Africa has a particular set of techniques and tactics that they follow that can be used to identify phishing emails.&lt;/p&gt;

&lt;h3&gt;Cybersquatting: dsd-govtenders.online&lt;/h3&gt;

&lt;div class="card p-4"&gt;
&lt;pre&gt;X-Mozilla-Status: 0001
X-Mozilla-Status2: 00000000
Return-Path: &amp;lt;zikhona.sodika@dsd-govtenders.online&amp;gt;
Delivered-To: mark@sanitised-domain-one.com
Received: from sanitised-domain-two.com (localhost [127.0.0.1])
    by sanitised-domain-two.com (Postfix) with ESMTP id 2562EB82933
    for &amp;lt;mark@sanitised-domain-one.com&amp;gt;; Fri, 11 Apr 2025 14:25:14 +0000 (UTC)
Received: from sanitised-domain-three.com [176.31.78.120]
    by sanitised-domain-two.com with POP3 (fetchmail-6.4.38)
    for &amp;lt;mark@sanitised-domain-one.com&amp;gt; (single-drop); Fri, 11 Apr 2025 14:25:14 +0000 (UTC)
Received: from JN3P275CU003.outbound.protection.outlook.com (mail-southafricanorthazon11021136.outbound.protection.outlook.com [40.107.141.136])
    by sanitised-domain-three.com (Postfix) with ESMTPS id DFEE016E00AD
    for &amp;lt;mark@sanitised-domain-one.com&amp;gt;; Fri, 11 Apr 2025 15:24:18 +0100 (BST)
Authentication-Results: sanitised-domain-three.com;
    dkim=none;
    arc=pass ("microsoft.com:s=arcselector10001:i=1");
    dmarc=none;
    spf=pass (sanitised-domain-three.com: domain of zikhona.sodika@dsd-govtenders.online designates 40.107.141.136 as permitted sender) smtp.mailfrom=zikhona.sodika@dsd-govtenders.online
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed;
    d=sanitised-domain-one.com; s=mail; t=1744381460;
    h=from:from:reply-to:subject:subject:date:date:message-id:message-id:to:
     cc:mime-version:mime-version:content-type:content-type;
    bh=yoYEgrf3l9z7INabfXJ2vAybE/abtR7hsVhzvUAXyhU=;
    b=sPHUeukaI4QDAbQj7SrrEwXy9KZqOEA7oX62U+R8DbHY2oK13zb4UgRsNXV7V0PpVtL/ZR
    dzpyMoqmZZcp3AwATLLQPnn8VjIfFqhb0x4Geb5u8/zdoQjsPx1euvL4DKJh0GBN7VkKDW
    FYqe397t2QJg4vnc+Lpk4FlkJ4G1ue4=
ARC-Authentication-Results: i=2;
    sanitised-domain-three.com;
    dkim=none;
    arc=pass ("microsoft.com:s=arcselector10001:i=1");
    dmarc=none;
    spf=pass (sanitised-domain-three.com: domain of zikhona.sodika@dsd-govtenders.online designates 40.107.141.136 as permitted sender) smtp.mailfrom=zikhona.sodika@dsd-govtenders.online
ARC-Seal: i=2; s=mail; d=sanitised-domain-one.com; t=1744381460; a=rsa-sha256;
    cv=pass;
    b=dPxdGRXIPqX9DRI9BtVDjm8Ls786WtX/jpIVw2w1diHOJqkDRhx8L/C19SN1VOvF4UQ75p
    wnPo3TKJO0Vb7RaxCil9K9ATA1vvck9jcfD0ZtRIX9vLu1SIjcvdjz1RYsVCGBFUjXY9j+
    C9SOi8u9iH7OSjQKVtEPH3S0jiaKMos=
ARC-Seal: i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none;
 b=HyyhvFKT/a2UUfqs+55LVKYjvoKN7+9RGfT45R3SJwHb0PMCVjw0WyuOhTDrxpCSV1vNJV2b9/jW0xAA5lsy1PUlJdL9Rmv4sQNlCgRRKQNuGSTmfLGV9VPdY3WWGTZFD9bt3wEWmhEdQhy6lou4Q3POUsbOxyGrdeFB/iI5D8DeA8acE7Tnhx/St4df75nT8pbUv5nnY4GAyEwPCoyk/DtD77frDlLEzsP63FJpvzEvSwvXU1OAaJRY4D7AuA8H99SM2RVRtx77q5rUUumVp7aTFi2v82afkzTcTNMfdTe6HBjoEoYpWyF6oECEiiCvRSMH/mvFKH6V2opBqcwwMQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector10001;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=yoYEgrf3l9z7INabfXJ2vAybE/abtR7hsVhzvUAXyhU=;
 b=MTZfy3Ru/T0IUPJjyIJhKg/kDWn/5e4DZkIksEGOuNXAMlPsClgA8kKnsEUc6NvJba3++MhPcxzVjvT56PgX1xtFz83PavCMk8e78WHKgEdTNYmHj3M6r6sZFVWXoHOF+qSD84ouVCGSrAQqMVDDR7n8ANwXj1SzS7TaUrnI5XBMpBukxeNzpop1x/1EFdxc2sk08WPEqgwVkiTwgFuXEWrSbQYB81J+3Po2vBOq3mw7T8pPJPoNjZu54I/rMKl/OuTTVagU/OKrB1kJXiGlcH5BGMQ5WNhK3LrM55c783+DhqzGxjMdjAqsw5MVciWgQrt4FI5PQiDm150pkl6Kvw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=dsd-govtenders.online; dmarc=pass action=none
 header.from=dsd-govtenders.online; dkim=pass header.d=dsd-govtenders.online;
 arc=none
Received: from CP7P275MB1559.ZAFP275.PROD.OUTLOOK.COM (2603:1086:100:3e::11)
 by JN3P275MB2712.ZAFP275.PROD.OUTLOOK.COM (2603:1086:0:bb::5) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.8632.27; Fri, 11 Apr 2025 14:08:57 +0000
Received: from CP7P275MB1559.ZAFP275.PROD.OUTLOOK.COM
 ([fe80::2910:b44a:8e71:32d3]) by CP7P275MB1559.ZAFP275.PROD.OUTLOOK.COM
 ([fe80::2910:b44a:8e71:32d3%7]) with mapi id 15.20.8632.025; Fri, 11 Apr 2025
 14:08:57 +0000
From: zikhona sodika &amp;lt;zikhona.sodika@dsd-govtenders.online&amp;gt;
Subject: Seeking Immediate Service Provider
Thread-Topic: Seeking Immediate Service Provider
Thread-Index: AQHbquq6V9sN+KEKtEiv7XNhKuBfLQ==
Date: Fri, 11 Apr 2025 14:08:57 +0000
Message-ID:
 &amp;lt;CP7P275MB1559C0F9FD8A280351263C30C1B62@CP7P275MB1559.ZAFP275.PROD.OUTLOOK.COM&amp;gt;
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
msip_labels:
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: CP7P275MB1559:EE_|JN3P275MB2712:EE_
x-ms-office365-filtering-correlation-id: 8222b1d6-6690-4ae6-b6e6-08dd790266a1
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230040|7416014|376014|1800799024|366016|7055299006|10085299006|8096899003|4053099003|38070700018|27013499003;
x-microsoft-antispam-message-info:
 =?iso-8859-1?Q?AxjxpuYCDzACm2gNeLj/BnAgl1OM/ytXBKbgUBdnMhwUYfA26pWOjj8wDr?=
 =?iso-8859-1?Q?TJxAj5xC3SRATsgMHIgk793p/4gXaylW6WUNP7yJWoBL3rTitRj7wW55k9?=
 =?iso-8859-1?Q?QUo1Za6O2VRX4vR/xG+vCSFCyhb9v9xyCqqR6Q3kT5WRBRUhkUCi5ebap7?=
 =?iso-8859-1?Q?YQ1hji8zSk38HVYpT9uBAz3EYaw449CzJhO4krqdDdM6NPvCHLDnyzqu2g?=
 =?iso-8859-1?Q?wcF9sCC/mkXa2CHUlYq2Dp3x2YC3l/e4xgwLrb7uOXIk+DLG/iZCweT9yn?=
 =?iso-8859-1?Q?VHsxyvFaNdeQhkEZ33tAjQe3074EFrHJy8KWN62LhLTYKeg+Gbmn0sDAE8?=
 =?iso-8859-1?Q?lg39svLPGn9WtXP8iG3alSqo75EB7jvytLcC7+W/+V9lZYbi+8QtAj8jiG?=
 ...(edited)
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:CP7P275MB1559.ZAFP275.PROD.OUTLOOK.COM;PTR:;CAT:NONE;SFS:(13230040)(7416014)(376014)(1800799024)(366016)(7055299006)(10085299006)(8096899003)(4053099003)(38070700018)(27013499003);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?iso-8859-1?Q?0RR4dLnFtVa6l9QD5Tpv9ekvnvy2FTzNyR8akfH+be3QnwSCGUEZ4eXfXi?=
 =?iso-8859-1?Q?aZJJObThtTk9Pt+CmijcRVBd3iMorsQt5X+zsptyfH7Ph4KcvPY85ZSH4l?=
 ...(edited)
Content-Type: multipart/mixed;
    boundary="_004_CP7P275MB1559C0F9FD8A280351263C30C1B62CP7P275MB1559ZAFP_"
MIME-Version: 1.0
X-OriginatorOrg: dsd-govtenders.online
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: CP7P275MB1559.ZAFP275.PROD.OUTLOOK.COM
X-MS-Exchange-CrossTenant-Network-Message-Id: 8222b1d6-6690-4ae6-b6e6-08dd790266a1
X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Apr 2025 14:08:57.5088
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 9a580612-8441-4101-9b38-0037105854c3
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: l0CMIWIqn/mxzNqrfAaeHucbnB7k8AqY0/JD7yiFFYIT1pVEc6OhvUUaXp7qBgmJIf2OeOiilUfDDCgFofU93fo27V14SGx0G24aWdTzbj23HU/6uZVaCoSit8GrwHGa
X-MS-Exchange-Transport-CrossTenantHeadersStamped: JN3P275MB2712
X-Spam-Status: Yes, score=8.10
X-Spamd-Bar: ++++++++
X-Spam-Level: ********
X-Spam: Yes

--_004_CP7P275MB1559C0F9FD8A280351263C30C1B62CP7P275MB1559ZAFP_
Content-Type: multipart/alternative;
    boundary="_000_CP7P275MB1559C0F9FD8A280351263C30C1B62CP7P275MB1559ZAFP_"

--_000_CP7P275MB1559C0F9FD8A280351
&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;This email used a "fake" domain, &lt;code&gt;dsd-govtenders.online&lt;/code&gt;, with a request to quote on a fake tender, hoping that the recipient doesn't look too closely at the sending email domain. The domain &lt;code&gt;dsd-govtenders.online&lt;/code&gt; is a "legitimate" domain owned by the scammer. Since they own the domain, they can generate SPF, DMARC and DKIM records for it. They hope that the SPF pass is enough to convince you the mail is legitimate.&lt;/p&gt;

&lt;p&gt;Despite this, the Authentication-Results show that neither DKIM nor DMARC is set up:&lt;/p&gt;

&lt;pre&gt;Authentication-Results: sanitised-domain-three.com;
    dkim=none;
    arc=pass ("microsoft.com:s=arcselector10001:i=1");
    dmarc=none;
    spf=pass (sanitised-domain-three.com: domain of zikhona.sodika@dsd-govtenders.online designates 40.107.141.136 as permitted sender) smtp.mailfrom=zikhona.sodika@dsd-govtenders.online&lt;/pre&gt;

&lt;p&gt;This is only slightly better than some who try to set up DKIM and DMARC but get it wrong. The syndicates like using GoDaddy hosting, which is a common indicator of these tender scam emails.&lt;/p&gt;

&lt;p&gt;The use of Exchange by GoDaddy may also give the email a false sense of legitimacy. I mean, Exchange's anti-spam test passes with flying colours, right? The fact that one needs a credit card and other identification to register with GoDaddy makes it even more perplexing why law enforcement can't seem to stop these criminals.&lt;/p&gt;

&lt;p&gt;We can check the SPF records for &lt;code&gt;dsd-govtenders.online&lt;/code&gt; with:&lt;br&gt;
&lt;br&gt;
&lt;code&gt;dig dsd-govtenders.online TXT&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We get the response:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;dsd-govtenders.online. 3600 IN TXT "v=spf1 include:secureserver.net -all"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;All the servers are owned by GoDaddy. There is no DKIM header, and there is no DMARC:&lt;br&gt;
&lt;br&gt;
&lt;code&gt;dig _dmarc.dsd-govtenders.online TXT&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;No DNS records are returned. Looking at when the domain was registered with:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;whois&amp;nbsp;dsd-govtenders.online&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We can extract the following date:&lt;br&gt;
&lt;code&gt;Creation Date: 2025-02-06T17:12:24.0Z&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Hmm, doesn't look legit.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p class="mb-3" itemprop="articleBody"&gt;SPF, DKIM, and DMARC are critical for combating email spoofing, but they’re only effective when properly configured. South Africa’s tender scam syndicate exploits missing DMARC records and lax configurations to deceive organisations. By analysing email headers and DNS records, you can identify these scams. Stay tuned for Part 2, where we’ll explore additional red flags like suspicious links and social engineering tactics.&lt;/p&gt;
&lt;/div&gt;</summary>
    <dc:creator>Mark Clarke</dc:creator>
    <dc:date>2025-04-13T10:35:00Z</dc:date>
  </entry>
  <entry>
    <title>Application Password Best Practice</title>
    <link rel="alternate" href="https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=14187203" />
    <author>
      <name>Mark Clarke</name>
    </author>
    <id>https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=14187203</id>
    <updated>2025-04-13T08:40:04Z</updated>
    <published>2025-04-13T08:24:00Z</published>
    <summary type="html">&lt;p class="submitted"&gt;Submitted by Mark Clarke on Wed, 06/10/2015 - 08:57&lt;/p&gt;

&lt;div&gt;
&lt;div&gt;
&lt;div&gt;
&lt;p&gt;Note: This article is old and needs to be updated but is reproduced here for history. It doesn't cover API tokens, OAuth2/OpenID, Kerberos etc.&lt;br&gt;
&lt;br&gt;
Our &lt;strong&gt;&lt;a href="http://www.jumpingbean.co.za/Certified-Ethical-Hacker-Training" target="_blank"&gt;Certified Ethical Hacker CEH training course&lt;/a&gt;&lt;/strong&gt; is attended by all types, from system administrators and application developers to cyber security professionals. A common question from developers is how to implement secure passwords in applications when there is no LDAP, Kerberos or Active Directory integration.&lt;/p&gt;

&lt;p&gt;This question comes up in the sections of the CEH training covering password cracking techniques, password algorithms and best practice, so it's appropriate to cover what these are first.&lt;/p&gt;

&lt;div class="align-items-center d-flex justify-content-center"&gt;
&lt;div class="btn btn-light"&gt;&lt;strong&gt;&lt;a href="https://cybersecuritytraining.tech/w/eccouncil/certified-ethical-hacker-ceh-training" target="_blank"&gt;Certified Ethical Hacker (CEH) Training &lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
&amp;amp;&lt;br&gt;
&lt;strong&gt;&lt;a href="https://cybersecuritytraining.tech/w/eccouncil/certified-devsecops-engineer" target="_blank"&gt;Certified Security Analyst Training (ECSA)&lt;/a&gt;&lt;/strong&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;Password History&lt;/h2&gt;

&lt;p&gt;Back in the day it was thought to be enough to merely hash user passwords with a one-way encryption algorithm such as MD5. If the stored password was revealed, for example via someone getting read access to the password database, or if it was intercepted on the network (assuming the encrypted form, not the plain text, is being transferred), the perpetrator would still not have access to the actual password.&lt;/p&gt;

&lt;p&gt;A lot of protocols and applications still use the password authentication protocol (PAP), where plain-text passwords are passed around until they reach the password database, where they are encrypted and then compared to the stored encrypted password for authorisation.&lt;/p&gt;

&lt;p&gt;In early versions of Unix, for example, encrypted passwords were stored in the world-readable /etc/passwd file, until it was realised this was not a good idea and they were moved out to the shadow password file, readable only by root. The first step is never to let anyone get hold of the encrypted password :)&lt;/p&gt;

&lt;h2&gt;Password Encryption&lt;/h2&gt;

&lt;p&gt;The principle is that, given a sound encryption algorithm, the amount of computing time required to crack a password encrypted with that algorithm should make the attempt infeasible or uneconomic. But as computing power increases, the algorithms used need to be enhanced or changed to keep the cost curve high enough. Additionally, some formerly "secure" algorithms are found to have fatal flaws some time after adoption.&lt;/p&gt;

&lt;p&gt;MD5, for example, is now considered insecure due to a flaw in the algorithm: a way has been found to generate collisions, i.e. the same hash is generated for different inputs.&lt;/p&gt;

&lt;p&gt;SHA1 was, at the time of writing, still considered secure, but as computing power increases it is considered a matter of time (years) before computers that can crack a SHA1 password are commonly available. Everyone is upgrading to SHA2.&lt;/p&gt;

&lt;h2&gt;"Uncrackable" Encryption Algorithm&lt;/h2&gt;

&lt;p&gt;So, assuming you have an "uncrackable" algorithm, is that enough for password security? The answer is, of course, no. If someone has an encrypted password they can simply attempt to brute-force it, i.e. guess the password by trying different combinations of characters until they get a matching hash; hence those minimum password requirements.&lt;/p&gt;

&lt;p&gt;Even if proper password policies are enforced, such as a minimum length of 10 characters or more with a combination of upper case and lower case etc., passwords are still vulnerable, simply because people tend to use common combinations of letters and characters, thereby limiting the key space to a size that becomes feasible to crack.&lt;/p&gt;

&lt;h2&gt;Rainbow Tables&lt;/h2&gt;

&lt;p&gt;Still, brute forcing passwords protected by a good policy is time consuming. To speed up attacks, hashed passwords can be pre-generated and then compared against any encrypted password via a lookup to check for a match. This speeds up password "cracking" by orders of magnitude, as one no longer needs to laboriously encrypt every combination in the key space and then compare it to the actual password, aka brute forcing.&lt;/p&gt;

&lt;p&gt;These pre-generated tables of passwords are commonly called rainbow tables and are one of the first things run against a password database once it is obtained.&lt;/p&gt;
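&lt;p&gt;A toy sketch shows why unsalted hashes fall to precomputation. (True rainbow tables use hash chains to trade storage for computation, but the effect on an unsalted hash database is the same: cracking becomes a dictionary lookup.)&lt;/p&gt;

```python
import hashlib

# Sketch: a precomputed lookup table over common passwords. Against an
# unsalted MD5 password database, recovering a password is just a lookup.

common = ["123456", "password", "password123", "qwerty", "letmein"]
table = {hashlib.md5(p.encode()).hexdigest(): p for p in common}

stolen = hashlib.md5(b"password123").hexdigest()  # hash from a breached database
print(table.get(stolen))  # password123
```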

&lt;h2&gt;Password Salting and Hashing&lt;/h2&gt;

&lt;p&gt;So even if you force your users to use strange combinations of keys, users will probably use the common special characters "!" and "#", digits, and the same passwords across applications and services. If I get your password from a service that is less secure, I can simply encrypt it with a different algorithm and compare it to another database for a match.&lt;/p&gt;

&lt;p&gt;A technique to work around rainbow tables and reused or common passwords, and that heightens the cost of cracking, is to use what is referred to as a salt. A salt is a random combination of characters used as a prefix or suffix to a user password, which is then encrypted and stored. The salt also needs to be stored along with the password, otherwise it would not be possible for the algorithm to generate the encrypted password again.&lt;/p&gt;

&lt;p&gt;The benefit of this is that rainbow tables cannot be used, forcing the person trying to crack the password back to brute-forcing it. The important point to note is that the salt is kept along with the password, so if the database is stolen the perpetrator has access to the salt as well. They can still brute-force passwords, but not use rainbow tables. (Unless, of course, your salt generator has a limited key space too, and then they just pre-generate all combinations thereof.)&lt;/p&gt;
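&lt;p&gt;A minimal sketch of the effect of salting: the same password stored for two users yields two different hashes, so a single precomputed table no longer works. (For production, prefer a deliberately slow scheme such as bcrypt over a bare fast hash; the hypothetical helper below exists only for illustration.)&lt;/p&gt;

```python
import hashlib
import secrets

# Sketch: a per-user random salt makes precomputed tables useless, because
# the same password hashes to a different value for every user. A real
# system should use an adaptive scheme (bcrypt, scrypt, Argon2) instead of
# a single fast SHA-256 round.

def salted_hash(password: str, salt: bytes) -> str:
    return hashlib.sha256(salt + password.encode()).hexdigest()

salt_a = secrets.token_bytes(16)  # stored alongside user A's hash
salt_b = secrets.token_bytes(16)  # stored alongside user B's hash

# Same password, different salts, different stored hashes.
print(salted_hash("password123", salt_a) != salted_hash("password123", salt_b))  # True
```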

&lt;h2&gt;Application Password Best Practice&lt;/h2&gt;

&lt;p&gt;So to implement a good password scheme for your application you need:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;A good algorithm, preferably one which can increase the cost of computation over time without requiring algorithm changes&lt;/li&gt;
	&lt;li&gt;A good salt generator&lt;/li&gt;
	&lt;li&gt;A protocol to store the salt and encrypted password&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Bcrypt Algorithm and Password API&lt;/h2&gt;

&lt;p&gt;The state-of-the-art algorithm for password encryption, that is also easy to use, is the Bcrypt algorithm.&lt;/p&gt;

&lt;p&gt;Bcrypt is based on the Blowfish cipher, uses salts to encrypt passwords, and is an adaptive algorithm. The algorithm incorporates an iteration count, the number of times the password is hashed, that can be increased over time. This ensures that the cost to compute the encrypted password keeps rising even as computing power increases, simply by increasing the number of iterations.&lt;/p&gt;

&lt;p&gt;As an application developer you will not have to change any code to generate stronger encryption; simply read the number of iterations from a configuration file to generate or regenerate an encrypted password and you're good to go. Of course, this assumes no flaw is discovered in Blowfish at some point.&lt;/p&gt;

&lt;p&gt;There are many bcrypt libraries out there for your favourite language. We use the PHP and Java libraries, and using them couldn't be simpler. Below is an example in Java.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import org.mindrot.jbcrypt.BCrypt;

public class UserService {

        public boolean savePassword(User user, String password) {
                ...
                // gensalt's argument is the log2 of the iteration count;
                // 12 is a reasonable default, raise it as hardware gets faster
                String enc = BCrypt.hashpw(password, BCrypt.gensalt(12));
                ...
        }

        public boolean authenticate(String username, String password) {
                ...
                String enc = getUserPassword(username);
                if (enc != null) {
                        return BCrypt.checkpw(password, enc);
                } else {
                        return false;
                }
        }
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Just remember to make it hard to get to the user database in the first place.&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;</summary>
    <dc:creator>Mark Clarke</dc:creator>
    <dc:date>2025-04-13T08:24:00Z</dc:date>
  </entry>
  <entry>
    <title>Process Substitution - Linux Expert Tip</title>
    <link rel="alternate" href="https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=14187056" />
    <author>
      <name>Mark Clarke</name>
    </author>
    <id>https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=14187056</id>
    <updated>2025-04-13T08:19:05Z</updated>
    <published>2025-04-13T07:36:00Z</published>
    <summary type="html">&lt;p&gt;So I am sure you have all heard of, and used, &lt;a href="https://wiki.bash-hackers.org/syntax/expansion/cmdsubst" target="_blank"&gt;command substitution&lt;/a&gt; and &lt;a href="https://wiki.bash-hackers.org/syntax/pe" target="_blank"&gt;parameter substitution or expansion&lt;/a&gt; but process substitution?&lt;/p&gt;

&lt;div class="d-flex justify-content-center"&gt;
&lt;div class="btn btn-light"&gt;&lt;strong&gt;&lt;a href="https://linuxcertification.co.za/lpi" target="_blank"&gt;LPI Linux Training By Professionals&lt;/a&gt;&lt;/strong&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Process substitution allows you to send stdout of one process to stdin of another process. Bah you say! This is just piping stdout to stdin and can be done simply with pipes (|). What is so awesome about chaining together standard streams?&lt;/p&gt;

&lt;p&gt;Yes, it is true that pipes can be used to pump the stdout of one process to the stdin of another and are trivial to use, but what if you have a command that takes two inputs, not just one? Or what if you need to branch your processing into two different pipelines, i.e. duplicate the intermediate results on stdout and process each copy differently from then on?&lt;/p&gt;

&lt;p&gt;Since you can only redirect one stream's stdout or stdin, piping won't help. And what happens if the command reads from or writes to a file only, and not stdin/stdout? In these cases you are left creating temporary files to hold the intermediate output for consumption by subsequent commands. But what if I told you there is an easier way?&lt;/p&gt;

&lt;p&gt;Process substitution allows the stdout of one command to appear as a file for consumption by subsequent commands, or allows a command that expects to write to a file to write to the stdin of a subsequent command instead.&lt;/p&gt;

&lt;h3&gt;Using files to store intermediate results&lt;/h3&gt;

&lt;p&gt;Here is an example of what we do without process substitution. The &lt;strong&gt;diff&lt;/strong&gt; command takes two files as input and outputs the difference between the two files. Let's say we wanted to get the difference between two directory listings. We would need to do something like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;ls -l /home/tux &amp;gt; file1
ls -l /home/tuxbk &amp;gt; file2
diff -u file1 file2&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We couldn’t just pipe the commands together like so:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ls -l /home/tux | ls /home/tuxbk | diff -u&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;But we can do this all in one line with process substitution.&lt;/p&gt;

&lt;h3&gt;Process Substitution Syntax&lt;/h3&gt;

&lt;p&gt;First let's look at the general process substitution syntax: &amp;lt;(command) reads from a command's stdout instead of a file, and &amp;gt;(command) writes to a command's stdin instead of a file. Note: there is no space between the angle bracket and the left parenthesis! Let's first look at a simple, and admittedly useless, example:&lt;/p&gt;

&lt;p&gt;We can run wc as follows:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ls -l | wc&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Outputs:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;168 1512 13430&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The above can be done as follows using process substitution:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;wc &amp;lt;(ls -l)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Outputs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;168 1512 13430 /dev/fd/63&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The results are the same, but notice the file descriptor handle (/dev/fd/63) displayed in the output of the substitution version. You can think of process substitution as creating a temporary file to hold the intermediate results for consumption by a later command. This (/dev/fd/63) is the intermediate file used to send the results between the two processes.&lt;/p&gt;
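&lt;p&gt;You can see the mechanism for yourself with this small bash-specific sketch (my own example): echo prints the file name the substitution expands to, and the substituted "file" can be read like any other file:&lt;/p&gt;

```shell
# echo prints the argument bash substituted in: a /dev/fd/NN path
echo <(true)

# the substituted "file" can be read like any other file
cat <(echo hello)
```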

&lt;h3&gt;Process Substitution - Reading from a process's stdout instead of a file&lt;/h3&gt;

&lt;p&gt;So now let's look at something more interesting. I introduced the diff command above to display the changes between two directories. With process substitution we can now do this in one line:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;diff -u &amp;lt;(ls -l /home/tux) &amp;lt;(ls -l /home/tuxbk)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Above we have replaced the two file descriptors, which the diff command will read from, with the output of two processes.&lt;/p&gt;

&lt;h3&gt;Some Awesome Process Substitution Examples&lt;/h3&gt;

&lt;p&gt;The power of process substitution can be further demonstrated with the &lt;strong&gt;join&lt;/strong&gt; command. The join command takes two sorted files as input and joins lines in the first file to matching lines in the second, discarding lines which do not match.&lt;/p&gt;

&lt;p&gt;Armed with this, we could join the output of the &lt;strong&gt;top&lt;/strong&gt; command to the output of the &lt;strong&gt;iotop&lt;/strong&gt; command. Note: if you are using sudo with a password you will need to prime the credential cache with the user's password to avoid an error at the prompt on the first run. You will also have to install &lt;strong&gt;iotop&lt;/strong&gt; if you don't have it already.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;join &amp;lt;(top -b -n 1 | sed '1,7d'| sort -n) &amp;lt;(sudo iotop -P -b -n 1 | sed '1,3d' | sort -n)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The above command matches PIDs from top with PIDs from iotop and combines the output. There are simpler ways of doing this with sar, but it illustrates the point quite nicely.&lt;/p&gt;

&lt;h3&gt;Splitting your process pipeline with Tee and Process Substitution&lt;/h3&gt;

&lt;p&gt;And the best place to use process substitution is with the &lt;strong&gt;tee&lt;/strong&gt; command. Traditionally we use the tee command to pipe intermediate output to a file and then continue to process the output down our pipeline but with process substitution we can do even better!&lt;/p&gt;

&lt;h3&gt;Process Substitution - Writing to a process's stdin instead of a file&lt;/h3&gt;

&lt;p&gt;With tee and process substitution we can split output from a series of commands into two parallel streams of execution. First a simple example of the tee command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;grep -i "error" /var/log/syslog | tee errors.txt | grep apache &amp;gt; apache.error&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The above simply greps for the pattern "error", saves all matching lines to errors.txt, and then extracts just those error lines that relate to apache. But what if we also wanted to extract the errors for mysql into a separate file, as well as the apache errors? We can use process substitution:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;grep -i "error" /var/log/syslog | tee &amp;gt;(grep mysql &amp;gt;mysql.error) | grep apache &amp;gt; apache.error&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Another example calculates MD5 and SHA-256 sums of the files in a directory:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;find ./ -type f | tee &amp;gt;(xargs -n1 md5sum &amp;gt;md5sums.txt)| xargs -n1 sha256sum &amp;gt;sha256sums.txt&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You will notice that inside each process substitution we have redirected stdout to a file. If you didn't do this, then the stdout of the substituted commands would be piped to the subsequent commands too - which means you would have a fork-and-join kind of pipeline!&lt;/p&gt;
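&lt;p&gt;A quick sketch of that fork-and-join behaviour (my own example, bash-specific): when the substituted command's output is not redirected to a file, it rejoins tee's stdout, so the final command sees both streams:&lt;/p&gt;

```shell
# tee passes all 10 lines downstream AND feeds a copy to the substituted
# grep; grep's 5 matching even-numbered lines are not redirected to a file,
# so they rejoin the same pipe and wc -l counts 10 + 5 = 15 lines.
seq 1 10 | tee >(grep '[02468]$') | wc -l
```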

&lt;h2&gt;Process Substitution - The Best Kept Shell Secret!&lt;/h2&gt;

&lt;p&gt;Isn't that awesome? Now go forth and do magic in the world!&lt;/p&gt;</summary>
    <dc:creator>Mark Clarke</dc:creator>
    <dc:date>2025-04-13T07:36:00Z</dc:date>
  </entry>
  <entry>
    <title>Ghost in the subshell</title>
    <link rel="alternate" href="https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=14186876" />
    <author>
      <name>Mark Clarke</name>
    </author>
    <id>https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=14186876</id>
    <updated>2025-04-13T08:01:33Z</updated>
    <published>2025-04-13T07:03:00Z</published>
    <summary type="html">&lt;h1&gt;Ghost in the subshell&lt;/h1&gt;

&lt;p class="submitted"&gt;Submitted by Mark Clarke on Thu, 08/03/2017 - 14:27&lt;/p&gt;

&lt;div&gt;
&lt;p&gt;Bash is a deceptively simple interpreter: seemingly easy to use, but harder to master in its subtleties. But this is what makes free software so much more fun and intellectually stimulating than stuff rushed out a corporate door with deadlines and a minimum-viable-product mentality.&lt;/p&gt;

&lt;h2&gt;Bash Pipelines and Subshells&lt;/h2&gt;

&lt;p&gt;One of those shell subtleties that takes a while to grok is subshells - in this case, subshells invoked via pipes. Most people are familiar with bash pipes and redirection of "stdin" and "stdout". If not, you should attend one of our &lt;a href="https://linuxcertification.co.za" target="_blank"&gt;Linux training courses&lt;/a&gt; :).&lt;/p&gt;

&lt;p&gt;With pipes we can architect solutions from various components (commands, shell functions, etc.) to perform a task impossible to achieve with a single tool alone, and for which no specific utility exists. One subtlety about pipes is that, besides redirecting input and output streams, the shell invokes a subshell for each command in the pipeline, i.e. child processes are created that run in parallel with each other rather than serially. Let's look at a trivial example using pipes.&lt;/p&gt;

&lt;p&gt;When we see a pipeline such as&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cat /etc/passwd | tr a-z A-Z&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;instinctively we assume the "cat" command is run, then its output is passed to "tr", and then the "tr" command is run. Although it might appear to us that the commands run serially, this is a misconception. In fact both commands run concurrently; it is just that the command on the right is waiting on its standard input stream for data, which it gets from the command on the left. To prove this we can use a pipeline in which the right-hand command is not dependent on the left-hand command's output.&lt;/p&gt;

&lt;p&gt;To demo this we will also introduce the "time" command. As its name suggests, it provides timing information for commands, e.g.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;time sleep 2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;should show an elapsed time slightly longer than 2 seconds. So if we took two sleep commands and piped them together like so&lt;/p&gt;

&lt;p&gt;&lt;code&gt;time sleep 2 | sleep 2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;What time would we expect to see? If they ran serially it would be slightly more than 4 seconds, but if they run in parallel the time would be slightly more than &lt;strong&gt;2 seconds&lt;/strong&gt;. As you will see if you run the command, the time is slightly more than 2 seconds - showing the commands are running in parallel.&lt;/p&gt;
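&lt;p&gt;Those pipeline subshells also explain a classic bash gotcha, which makes the ghost visible (a side sketch of my own): a variable set in a pipeline stage disappears when that stage's subshell exits:&lt;/p&gt;

```shell
# read runs in the subshell created for its pipeline stage, so the
# variable it sets vanishes when that subshell exits (bash behaviour;
# zsh, by contrast, runs the last stage in the current shell)
unset var
echo "hello" | read var
echo "var is '${var}'"    # prints: var is ''
```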

&lt;h2&gt;Improve Bash Performance with Subshells&lt;/h2&gt;

&lt;p&gt;So how can we use this functionality? Let's say you have a long list of numbers you need to sum - how would you do this in bash, and how can we optimise it?&lt;/p&gt;

&lt;p&gt;First, some explanation of the examples to follow, so that we can separate the new commands from the subshell concept we are trying to show. To simulate a "list of numbers" we will generate a sequence from 1 to 9,000,000 using the "seq" command ("seq 1 9000000") and then sum the numbers up.&lt;/p&gt;

&lt;p&gt;Yes, that's right: basically we are going to calculate the sum of a range of integers. We are using a "large" range from 1 to 9 million so we can actually see the difference when we optimise our initial approach later. For a processor doing billions of operations a second this workload is no challenge; on faster machines you may need to increase the range to be able to observe any improvement.&lt;/p&gt;

&lt;p&gt;We could, of course, do this calculation in constant time, O(1), using Gauss's formula, but that would reduce the didactic value of this post :) So to add up the range generated by "seq" we will use "awk", along with the "time" command to get some readings for comparison.&lt;/p&gt;

&lt;h3&gt;Awk, Seq and Some Output Redirection&lt;/h3&gt;

&lt;p&gt;Note: because we are using a sequence of commands and passing them to the "time" command, we need to let "time" know we want to measure the execution time of the entire pipeline, not just the first command, so we have to wrap our command up in a "bash -c" call. We also have to escape the awk script's $ variables so bash doesn't try to interpret them as bash variables. Later, when we put this in a script, it will be much cleaner.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;time bash -c "seq 1 9000000 | awk 'BEGIN { SUM=0 } { SUM=SUM+\$0 } END{ printf \"%13.0f\", SUM }'"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Running this 4 times on my desktop gave me an average time of &lt;strong&gt;0.7355s&lt;/strong&gt;. We can check the accuracy of the calculation with Gauss's formula -&lt;/p&gt;

&lt;p&gt;&lt;code&gt;echo $(( 9000000 * (9000000 + 1) / 2 ))&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;So how can we speed this up? As already stated, pipelines do more than "funnel stdout to stdin"; they also invoke each command in a subshell, a child process of the parent shell. Essentially all the commands run in parallel, as we explained above. The entire pipeline exits once all the subshell processes return, so if there are no dependencies the longest-running process in the pipeline determines the run time of the pipeline.&lt;/p&gt;

&lt;h2&gt;Subshells, Scatter Gather and Named Pipes&lt;/h2&gt;

&lt;p&gt;We can use the fact that each command runs in a separate process to split our number range across several concurrent processes, have them sum individual subsets, and then gather the results and aggregate them in a map/reduce, scatter/gather type of algorithm.&lt;/p&gt;

&lt;p&gt;We are going to achieve this by dividing our sequence into 3 subsets, launching an explicit subshell for each subset, and summing the subsets up. We will use a named pipe to gather the results of the subshells and aggregate them. (If this were a file containing a long list of numbers we could use "sed -n '1,Xp' numbers.txt" to apportion the file.)&lt;/p&gt;
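&lt;p&gt;For example (a sketch of my own, using a throw-away file), sed -n with a line range prints just that slice of a file, which is how you would carve a real numbers file into subsets:&lt;/p&gt;

```shell
# write 9 numbers to a scratch file, then slice it into thirds by line range
tmp=$(mktemp)
seq 1 9 > "$tmp"
sed -n '1,3p' "$tmp"   # lines 1-3: the first subset
sed -n '4,6p' "$tmp"   # lines 4-6: the second
sed -n '7,9p' "$tmp"   # lines 7-9: the third
rm -f "$tmp"
```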

&lt;p&gt;First let's create a pipe with the "mkfifo" command. A named pipe is just like a bash pipe except that its lifetime is independent of any command that uses it; it can be written to by more than one command at a time and read from by more than one command at a time. We are going to use this ability to be written to by more than one command to gather the results of our separate subshells.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mkfifo ~/pipe&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;With our pipe established we can use it in our process as follows:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;time bash -c "seq 1 3000000 | awk 'BEGIN { SUM=0 } { SUM=SUM+\$0 } END { printf \"%13.0f\\n\", SUM }' &amp;gt; ~/pipe |
seq 3000001 6000000 | awk 'BEGIN { SUM=0 } { SUM=SUM+\$0 } END { printf \"%13.0f\\n\", SUM }' &amp;gt; ~/pipe |
seq 6000001 9000000 | awk 'BEGIN { SUM=0 } { SUM=SUM+\$0 } END { printf \"%13.0f\\n\", SUM }' &amp;gt; ~/pipe |
awk 'BEGIN {SUM=0} {SUM=SUM+\$0} END {printf \"%13.0f\", SUM }' &amp;lt; ~/pipe"&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Sjo, that's a lot of text, so let's break it down. The first 3 core command sequences are the same as our initial command, except that the results are redirected to the named pipe. Each "step" handles a subset of the range 1 to 9,000,000.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;seq 1 3000000 | awk 'BEGIN { SUM=0 } { SUM=SUM+\$0 } END { printf \"%13.0f\\n\", SUM }' &amp;gt; ~/pipe&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Since the stdout of each of these steps is written to the pipe, they all run independently. In fact each "step" invokes two "linked" subshells to get its job done.&lt;/p&gt;

&lt;p&gt;The last step aggregates the totals from the 3 steps by reading from the pipe:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;awk 'BEGIN {SUM=0} {SUM=SUM+\$0} END {printf \"%13.0f\" ,SUM }' &amp;lt;~/pipe&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;Improved Performance with Subshells&lt;/h2&gt;

&lt;p&gt;On my system the average time over 4 iterations was &lt;strong&gt;0.27325s&lt;/strong&gt;. This is a massive improvement over the &lt;strong&gt;0.7355s&lt;/strong&gt; it took to complete the task in a single subshell - more than 2.5 times faster.&lt;/p&gt;

&lt;p&gt;To make this easier to read and less repetitive we can put the subset sum step into a function and invoke it in a script.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;#!/bin/bash
set -e
set -o pipefail

function sum {
        seq $1 $2 | awk 'BEGIN { SUM=0 } { SUM=SUM+$0 } END { printf "%13.0f\n", SUM }';
}

sum 1 3000000 &amp;gt; ~/pipe |
sum 3000001 6000000 &amp;gt; ~/pipe |
sum 6000001 9000000 &amp;gt; ~/pipe |
awk 'BEGIN {SUM=0} {SUM=SUM+$0} END {printf "%13.0f\n", SUM }' &amp;lt; ~/pipe

exit 0&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Understanding subshells in bash is one of those concepts that takes time to really grok. It's quick to pick up the pipe character as a way to redirect stdin and stdout, but it is so much more subtle than just that. There are commands such as GNU parallel which take parallel processing to a whole new level.&lt;/p&gt;
&lt;/div&gt;</summary>
    <dc:creator>Mark Clarke</dc:creator>
    <dc:date>2025-04-13T07:03:00Z</dc:date>
  </entry>
  <entry>
    <title>The Future of Tech: Fast-Growing Career Options</title>
    <link rel="alternate" href="https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=8164517" />
    <author>
      <name>Bophelo-Botle Makuzeni</name>
    </author>
    <id>https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=8164517</id>
    <updated>2024-02-27T10:51:11Z</updated>
    <published>2024-02-26T07:50:00Z</published>
    <summary type="html">&lt;p&gt;Careers in Information Technology (IT) offer unparalleled opportunities for growth, innovation, and impact. From &lt;a href="https://cybersecuritytraining.tech" rel="noopener noreferrer" target="_blank"&gt;cybersecurity&lt;/a&gt; to &lt;a href="https://cloudconsulting.africa" rel="noopener noreferrer" target="_blank"&gt;cloud computing&lt;/a&gt;, &lt;a href="https://jumpingbean.co.za/we-build" rel="noopener noreferrer" target="_blank"&gt;software development&lt;/a&gt; to data analysis, the realm of IT encompasses a vast array of specialized roles that play crucial roles in driving the digital transformation of businesses and organizations worldwide. As technology continues to permeate every aspect of our lives, the demand for skilled IT professionals is skyrocketing, making it an exciting and lucrative field for aspiring individuals.&lt;/p&gt;

&lt;p&gt;However, with such diversity and dynamism comes the challenge of navigating the multitude of career paths, certifications, and specializations available. In this comprehensive guide, we delve into the multifaceted world of IT careers, exploring the myriad opportunities, challenges, and strategies for success in this ever-evolving industry.&lt;/p&gt;

&lt;p&gt;Whether you're a seasoned professional looking to advance your career or a newcomer seeking to break into the field, this article aims to provide invaluable insights and guidance to help you thrive in the dynamic realm of IT. We are here to assist with your &lt;a href="https://itcareerkickstarter.co.za/" rel="noopener noreferrer" target="_blank"&gt;IT career&lt;/a&gt;. &lt;a href="https://itcareerkickstarter.co.za/#contactus" rel="noopener noreferrer" target="_blank"&gt;Contact Us&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;a href="https://jumpingbean.ucertify.com/" rel="noopener noreferrer" target="_blank"&gt;&lt;strong&gt;AI and Machine Learning Specialist&lt;/strong&gt;&lt;/a&gt;: With AI and machine learning becoming integral to various industries, specialists in this area are in high demand. Careers focus on developing algorithms that can learn from and make predictions on data.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;strong&gt;Quantum Computing Scientist&lt;/strong&gt;: As quantum computing moves from research to reality, professionals with expertise in quantum mechanics and computer science are needed to develop new algorithms and applications.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;a href="https://cybersecurityservices.tech" rel="noopener noreferrer" target="_blank"&gt;&lt;strong&gt;Cybersecurity&lt;/strong&gt;&lt;/a&gt;: With the increasing frequency and sophistication of cyber-attacks, cybersecurity experts are crucial for protecting sensitive information across all sectors, from government to healthcare.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;a href="https://itcareerkickstarter.co.za/courses?filter_category_6985804=6885527" rel="noopener noreferrer" target="_blank"&gt;&lt;strong&gt;Data Scientist and Analysts&lt;/strong&gt;&lt;/a&gt;: The ability to analyze and derive insights from big data remains a highly valued skill, with applications in every sector from finance to healthcare to marketing.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;strong&gt;Blockchain Developer&lt;/strong&gt;: Beyond cryptocurrencies, blockchain technology offers applications in secure and transparent transaction processes, supply chain management, and more, leading to increased demand for skilled developers.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;a href="https://googlecloudtraining.net/cloud-architect" rel="noopener noreferrer" target="_blank"&gt;&lt;strong&gt;Cloud Solutions Architect&lt;/strong&gt;&lt;/a&gt;: As businesses continue to migrate to cloud services for scalability, flexibility, and efficiency, architects who can design and manage cloud computing strategies are essential.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;a href="https://jumpingbean.co.za/" rel="noopener noreferrer" target="_blank"&gt;&lt;strong&gt;IoT Solutions Specialist&lt;/strong&gt;&lt;/a&gt;: With the proliferation of connected devices, specialists who can develop and manage IoT solutions are needed to harness the power of this technology for smart homes, cities, and industries.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;a href="https://jumpingbean.co.za/we-train#cybersecurity" rel="noopener noreferrer" target="_blank"&gt;&lt;strong&gt;Ethical Hacker or Penetration Tester&lt;/strong&gt;&lt;/a&gt;: Professionals who can ethically breach systems to identify security weaknesses and vulnerabilities are critical in the fight against cybercrime.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;strong&gt;Augmented Reality (AR) and Virtual Reality (VR) Developer&lt;/strong&gt;: With applications in entertainment, education, and training, AR and VR developers are in demand to create immersive experiences.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;strong&gt;Sustainability in Tech Expert&lt;/strong&gt;: As the focus on environmental sustainability grows, professionals who can integrate green practices into technology design, development, and deployment are becoming increasingly important.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;strong&gt;Digital Transformation Consultant&lt;/strong&gt;: Experts who can guide businesses through digital transformation, helping them to implement new technologies and digital practices, are crucial for companies looking to stay competitive.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;strong&gt;Privacy Officer/Consultant&lt;/strong&gt;: With increasing regulations around data privacy (e.g., GDPR, CCPA), professionals who can navigate the legal and technical aspects of data privacy are essential.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;strong&gt;Edge Computing Specialist&lt;/strong&gt;: As edge computing grows to process data closer to where it's generated for improved speed and efficiency, specialists in this area are needed.&lt;/p&gt;
	&lt;/li&gt;
&lt;/ol&gt;</summary>
    <dc:creator>Bophelo-Botle Makuzeni</dc:creator>
    <dc:date>2024-02-26T07:50:00Z</dc:date>
  </entry>
  <entry>
    <title>Managing the Risk of OpenJDK Migration</title>
    <link rel="alternate" href="https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=7232459" />
    <author>
      <name>Mark Clarke</name>
    </author>
    <id>https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=7232459</id>
    <updated>2023-09-06T09:19:41Z</updated>
    <published>2023-09-06T09:06:00Z</published>
    <summary type="html">&lt;p&gt;Life as a Java developer used to be about writing awesome code. Understanding the latest APIs and hottest trends in application design and architecture and how you could leverage this all for fun and profit. For those that wanted to, you could delve into the fascinating or hellish depth of the Java VM, garbage collection and performance tweaking. One looked forward to a new version of Java and all the goodness it brought. It was not like those other runtimes where every release meant work to update your code base just to get what you already had. If you had the inclination you could recompile to get some performance boosts and it didn't require too much bureaucratic nonsense to upgrade from one version to the next and which distribution to use. DevOps did away with those pesky sysops roadblocks too about staying on older versions. In 2017 life got even better we had a new release cycle that promised even more goodness, more often. Times were good.​​​​​​​&lt;/p&gt;

&lt;h2&gt;JDK Licensing - A new headache&lt;/h2&gt;

&lt;p&gt;Then one day it changed, slowly at first. In 2019 Oracle announced it would end free public updates for commercial use of its Java 8 implementation, and if you installed any updates you would be liable. The writing was on the wall and the future was looking a little darker. There were some more gyrations in 2019 that confirmed the cloudy outlook for Oracle JDK, and alternative OpenJDK distributions looked better and better. Then came Covid and lockdowns, and we had other things to worry about. We emerged from that dystopian aberration to be hit with more licensing changes. Even to those who hoped the storm would pass them by, it was now clear one needed to invest more time and energy in the choice of which JDK distribution to use. Keeping up with the constant changes is tiring, not to mention unproductive, and it looked set to continue. Unless you enjoy this type of thing - in the words of Sweet Brown, "Ain't nobody got time for that". I once met someone who proudly told me he had passed a vendor licensing exam. That's right: the licensing was so complex one needed to be certified to be able to sell it. The last thing one needs is audit and compliance knocking at the door, asking questions with spreadsheets and PowerPoint presentations.&lt;/p&gt;

&lt;h2&gt;Looking for help to manage your JDK Challenges?&lt;/h2&gt;

&lt;p&gt;Thankfully Simon Ritter's new book "OpenJDK Migrations for Dummies" is here to help you understand the issues around the new world of Java VM licensing, how to choose an OpenJDK distribution provider, and how to plan and execute a migration strategy. As a developer, I enjoyed the chapters on "Preparing for Your Migration" and "Migrating Your Applications" the most, as they deal with the technical requirements. It's really nice to have a place where all the potential issues with JDK versions are documented in a clear and concise manner - a reference guide, so to speak. Appendix B, "Optimising the JVM for Lower Latency, Higher Throughput and Faster Warm-up", and Appendix C, "Runtime Security", were great too.&lt;/p&gt;

&lt;h2&gt;Manage your OpenJDK financial and compliance risk&lt;/h2&gt;

&lt;p&gt;The remaining chapters deal with the tedious but vitally important task of managing your Java JDK usage. Enterprises need to seriously consider the risk they expose themselves to if they don't undertake an exercise to understand their JDK usage, both financially from a licensing point of view and from a cybersecurity and resilience perspective. A team needs to be assembled and a project undertaken to address the potential risks and plot a way forward. The book provides handy checklists and schedules to audit and record your Java usage, shows how to plan your migration, and covers migration strategies. It provides guidance on choosing an OpenJDK distribution provider, covering issues such as supported versions, release cycles, and the types and levels of support, as well as the all-important issues around cost and compliance. If you are a Java developer, or in management and needing to undertake the unenviable task of assessing your Java environment, Simon Ritter's book is an invaluable guide to making the right choice for your business.&lt;/p&gt;
    <dc:creator>Mark Clarke</dc:creator>
    <dc:date>2023-09-06T09:06:00Z</dc:date>
  </entry>
  <entry>
    <title>Upgrading to Flectra 2 - A Journal</title>
    <link rel="alternate" href="https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=7104724" />
    <author>
      <name>Mark Clarke</name>
    </author>
    <id>https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=7104724</id>
    <updated>2024-03-22T13:35:30Z</updated>
    <published>2023-08-16T12:49:00Z</published>
    <summary type="html">&lt;h1&gt;Upgrade to Flectra 2 - A Journal&lt;/h1&gt;

&lt;p&gt;A few years ago, just before Covid, we were looking for an e-commerce engine to start an online shop spun out of our main business. We were also looking for an accounting system to replace the Windows-based system that our succession of bookkeepers insisted on using. When our last bookkeeper emigrated, we felt it was an opportune time to bring it all in-house and rely on an open-source system that we could take places we never considered with a closed-source system and, at the same time, begin to offer support and subscriptions from the creators to customers who would find the product as useful as we did.&lt;/p&gt;

&lt;p&gt;We looked at open-source accounting systems about a decade ago, sans the e-commerce requirement. At the time the main contenders were OpenBravo, Tiny ERP and SQLedger. We didn't settle on any of them at the time. Our risk assessment was as follows:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;&lt;strong&gt;OpenBravo&lt;/strong&gt; - too complex, and the project did not yet have a track record, so its future was uncertain. The community was still nascent.&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;SQLedger&lt;/strong&gt; - written in Perl, development looked to have stalled, and it was butt ugly, but it could meet our limited accounting requirements at the time.&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;Tiny ERP&lt;/strong&gt; - too complex for our requirements at the time, and it was client-based.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of these projects would have turned out to be a good option to bet the company accounting system on. OpenBravo stopped its open-source editions in 2019; it seems they regret the use of "Open" in their name now. Tiny ERP became Odoo and, as I will expand on, is being closed off section by section, and SQLedger went nowhere.&lt;/p&gt;

&lt;p&gt;When it came time to look at this again, Tiny ERP, now Odoo, stood out as a great option feature- and customisation-wise, and it was now web-based, but its business model left no doubt that the community edition is on notice. While looking we came across Flectra, a fork of Odoo, which at least seemed to address some of the risks with Odoo. There was a lot of bad stuff being said online, but since we were new to the Odoo/Flectra world it was hard to tell who was "in the right". Suffice it to say no one looked clean, but I could understand the fork given Odoo's moves to reduce functionality. We wouldn't want to sign up for a project that looked hostile to its community user base.&lt;/p&gt;

&lt;p&gt;The lack of a vibrant community around Flectra, and the fact that it was a new, unproven project, was a big risk for us, but we were under pressure, so we accepted the risk and implemented Flectra.&lt;/p&gt;

&lt;p&gt;Shortly after we implemented Flectra 1.6 and then upgraded to 1.7, Flectra 2 was announced. This was not great news for us, although a major version upgrade, whether from Flectra or Odoo, is something we knew was going to happen sooner or later. Unlike Odoo, where it is relatively easy to find assistance in the community, with Flectra this is less so. Flectra's main communication channel is Telegram, which is a walled garden that cannot be queried nor searched with a browser. Questions need to be answered time and again, as there is no history for users to refer to. It's hard to build up a knowledge repository this way, which is supposed to be one of the benefits of open source for adopters.&lt;/p&gt;

&lt;p&gt;This journal is our contribution to the Flectra community and to FlectraHQ, to see if we can help build up the expertise and knowledge that a vibrant open-source community can provide, and to thank the Flectra developers for their efforts. I am not sure where this journey will end, but I am hopeful that, with the community's expertise, we can assist each other and Flectra with this important project. To be clear, we have not yet completed the upgrade process, but we want to document what we have tried so far in the hope that it helps others and that we can work out the steps required for a successful upgrade as a community.&lt;/p&gt;

&lt;h2&gt;Base Server Setup&lt;/h2&gt;

&lt;p&gt;We set up a virtual machine to test the upgrade process. This would allow us to snapshot and roll back as needed as we experimented with the upgrade. We are using an Ubuntu 20.04 virtual machine, as its versions of Python and Postgres have fewer issues with Odoo 12, 13 and 14 than Ubuntu 22.04. Our production server is running Ubuntu 18.04, just in case this becomes relevant later.&lt;/p&gt;

&lt;p&gt;The plan is, based on fragmented information online, to upgrade from Flectra 1.7 to Odoo 12, then 13 and 14, before installing Flectra 2. The major version upgrades will be done with the Odoo community tool OpenUpgrade. There was a change in the way OpenUpgrade works from 13 to 14, which we will document, but the high-level steps for each version upgrade are as follows:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;Install Odoo.x from deb packages&lt;/li&gt;
	&lt;li&gt;Install the OpenUpgrade migration scripts for the relevant version (e.g. branch 12.0 for Flectra 1.7 to Odoo 12)&lt;/li&gt;
	&lt;li&gt;Copy over the relevant data and files to be upgraded&lt;/li&gt;
	&lt;li&gt;Update our custom modules&lt;/li&gt;
	&lt;li&gt;Install our standard modules for the Odoo.x version&lt;/li&gt;
	&lt;li&gt;Run the OpenUpgrade process&lt;/li&gt;
	&lt;li&gt;Run the Odoo.x update&lt;/li&gt;
	&lt;li&gt;Start the Odoo.x process and see if it works&lt;/li&gt;
	&lt;li&gt;Repeat until we are at the right version for Flectra 2&lt;/li&gt;
	&lt;li&gt;Upgrade to Flectra 2&lt;/li&gt;
	&lt;li&gt;Profit!!!&lt;/li&gt;
&lt;/ul&gt;
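&lt;p&gt;The steps above, as a rough pseudocode outline (the version numbers follow the plan; everything else is shorthand, not literal commands):&lt;/p&gt;

```
for version in 12, 13, 14:
    install odoo-<version> from the nightly deb packages
    check out the OpenUpgrade <version>.0 branch
    copy in the database, filestore and custom modules
    run OpenUpgrade's odoo-bin with --update=all --stop-after-init
    run the installed odoo binary with --update=all --stop-after-init
    start odoo and smoke-test the sites
upgrade from Odoo 14 to Flectra 2
```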

&lt;p&gt;Sounds like a walk in the park, right? Sadly, so far things don't look good. Here is what we have achieved so far.&lt;/p&gt;

&lt;h2&gt;Getting ready to go&lt;/h2&gt;


&lt;h3&gt;Postgresql Setup&lt;/h3&gt;

&lt;ul&gt;
	&lt;li&gt;Install Postgres
	&lt;ul&gt;
		&lt;li&gt;sudo apt install postgresql&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;Download the database backup file and restore it to the VM. In the Postgres CLI:
	&lt;ul&gt;
		&lt;li&gt;"create role flectra with login encrypted password 'flectra';"&lt;/li&gt;
		&lt;li&gt;"create database finance with owner flectra;"
		&lt;ul&gt;
			&lt;li&gt;make sure to name the database the same as in production as it is used in file store paths.&lt;/li&gt;
		&lt;/ul&gt;
		&lt;/li&gt;
		&lt;li&gt;"pg_restore -h localhost &amp;nbsp;-U flectra -W -d finance ~/Downloads/finance.dump"&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Filestore Setup&lt;/h3&gt;

&lt;ul&gt;
	&lt;li&gt;Download the filestore under "/var/lib/flectra/.local/share/Flectra/filestore". It is probably best to create a tar file and copy that over.&lt;/li&gt;
	&lt;li&gt;Copy the filestore to /var/lib/odoo/.local/share/Odoo/filestore
	&lt;ul&gt;
		&lt;li&gt;"sudo cp -a ./var/lib/flectra/.local/share/Flectra/filestore /var/lib/odoo/.local/share/Odoo/"&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;Change ownership to Odoo:Odoo
	&lt;ul&gt;
		&lt;li&gt;"sudo chown -R odoo:odoo /var/lib/odoo/.local/share/Odoo/filestore/"&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The /var/lib/odoo/.local/share/Odoo/filestore directory will not exist until you run Odoo, so you may need to create it, or run Odoo to create it, then stop Odoo and copy over the files.&lt;/p&gt;
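&lt;p&gt;As a sketch of the tar-based copy, here is the workflow run against stand-in paths under /tmp. Swap SRC and DST for the real /var/lib/flectra/.local/share/Flectra and /var/lib/odoo/.local/share/Odoo directories, and run the chown from the earlier step as root on the target server:&lt;/p&gt;

```shell
# Stand-in source and destination; point these at the real Flectra and
# Odoo data directories when doing it for real.
SRC=/tmp/demo/flectra-data
DST=/tmp/demo/odoo-data
mkdir -p "$SRC/filestore/finance"
echo "blob" > "$SRC/filestore/finance/0a1b2c3d"   # pretend attachment

# Package the filestore, then unpack it into the Odoo data dir,
# creating the destination first since it may not exist yet.
tar -C "$SRC" -czf /tmp/demo/filestore.tgz filestore
mkdir -p "$DST"
tar -C "$DST" -xzf /tmp/demo/filestore.tgz
ls "$DST/filestore/finance"
```

The -C flag makes tar change directory before archiving/extracting, so the archive contains a clean "filestore/" prefix regardless of where you run it from.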

&lt;h3&gt;Update and Install Custom modules&amp;nbsp;&lt;/h3&gt;

&lt;p&gt;We don't know a whole lot about Odoo and Flectra internals and development, but we have written some basic modules. This process is teaching us a whole lot about Flectra's internals that we wish we didn't need to know, but it all helps in the long run.&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;Copy over your custom modules. We grouped them in an addons-upgrade directory.
	&lt;ul&gt;
		&lt;li&gt;"scp -r &amp;nbsp;user@hosting.co.za:/home/user/addons-upgrade &amp;nbsp;./"&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;Update references to Flectra to Odoo
	&lt;ul&gt;
		&lt;li&gt;"grep -ilR flectra ./ | egrep -v 'git' | xargs -n 1 &amp;nbsp;sed -i 's/flectra/odoo/g'"&lt;/li&gt;
		&lt;li&gt;"grep -ilR 'fpl-1' ./ | grep -v git | xargs -n 1 sed -i 's/FPL-1/OPL-1/';"
		&lt;ul&gt;
			&lt;li&gt;this is to stop complaints about licenses during the upgrade&lt;/li&gt;
		&lt;/ul&gt;
		&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;Finally, copy over the custom modules to the addons directory
	&lt;ul&gt;
		&lt;li&gt;sudo cp -a ~/addons-upgrade/* /usr/lib/python3/dist-packages/odoo/addons/&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
&lt;/ul&gt;
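&lt;p&gt;To illustrate the two search-and-replace steps above, here is a minimal demo on a throwaway module tree under /tmp. The directory and file contents are made up; in practice you run the grep/sed pipeline inside your addons-upgrade directory:&lt;/p&gt;

```shell
# Build a tiny fake module to rename (names are illustrative only).
mkdir -p /tmp/demo-addons/my_module
printf 'from flectra import models, fields\n' > /tmp/demo-addons/my_module/models.py
printf "'license': 'FPL-1',\n" > /tmp/demo-addons/my_module/__manifest__.py

cd /tmp/demo-addons
# Rewrite flectra imports to odoo, then relabel the FPL-1 license as
# OPL-1 so the upgrade doesn't complain about unknown licenses.
grep -rl flectra . | xargs -n 1 sed -i 's/flectra/odoo/g'
grep -rli 'fpl-1' . | xargs -n 1 sed -i 's/FPL-1/OPL-1/g'

cat my_module/models.py        # now imports from odoo
cat my_module/__manifest__.py  # now OPL-1
```

The second grep is case-insensitive (-i) so it finds the upper-case FPL-1 strings; in a real checkout you would also filter out the .git directory as the commands above do.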

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The addons directory will not exist until you install Odoo.&lt;/p&gt;

&lt;p&gt;We had to edit "views.xml", "templates.xml" and models to accommodate changes between versions of Odoo. This will be different for your own custom modules. Our modules are simple, so it was mainly a matter of xpath queries to change the layout templates. We did find that, on occasion, we needed to delete the entries from &lt;strong&gt;ir_ui_view&lt;/strong&gt; in order for changes to templates.xml to be loaded, despite running an update.&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;"delete from ir_ui_view where id in (1554,2126);"&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Install the version of Odoo you are upgrading to&lt;/h2&gt;

&lt;ul&gt;
	&lt;li&gt;Download Odoo
	&lt;ul&gt;
		&lt;li&gt;wget https://nightly.odoo.com/12.0/nightly/deb/odoo_12.0.latest_all.deb&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;Install the python dependencies (you can probably skip this and fix it with the last 2 steps)
	&lt;ul&gt;
		&lt;li&gt;"sudo apt install python3-docutils python3-feedparser python3-gevent python3-html2text python3-mock python3-ofxparse python3-passlib python3-psutil python3-pydot python3-pypdf2 python3-serial python3-suds python3-usb &amp;nbsp;python3-werkzeug python3-xlsxwriter node-less &amp;nbsp;python3-vatnumber python3-num2words python3-babel &amp;nbsp;python3-decorator &amp;nbsp;python3-jinja2 python3-mako python3-psycopg2 postgresql-client python3-ldap python3-pip pyldap qrcode xlwt vobject -y"&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;Install Odoo
	&lt;ul&gt;
		&lt;li&gt;"sudo dpkg -i odoo_12.0.latest_all.deb"&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;Fix the broken install
	&lt;ul&gt;
		&lt;li&gt;"sudo apt install --fix-broken"
		&lt;ul&gt;
			&lt;li&gt;this will resolve the broken install you will get from the step above&lt;/li&gt;
		&lt;/ul&gt;
		&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;Edit /etc/odoo/odoo.conf as appropriate
	&lt;ul&gt;
		&lt;li&gt;[options]&lt;br /&gt;
		; This is the password that allows database operations:&lt;br /&gt;
		; admin_passwd = admin&lt;br /&gt;
		db_host = localhost&lt;br /&gt;
		db_port = 5432&lt;br /&gt;
		db_user = flectra&lt;br /&gt;
		db_password = flectra&lt;br /&gt;
		;addons_path = /usr/lib/python3/dist-packages/odoo/addons&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;Install these python modules that may be needed later
	&lt;ul&gt;
		&lt;li&gt;"sudo pip install pyldap qrcode xl&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Install OpenUpgrade&lt;/h2&gt;

&lt;p&gt;Next, we need to install OpenUpgrade. The OpenUpgrade library is required by the OpenUpgrade scripts. As far as we can tell, this needs to be installed separately from the GitHub repository. Don't use the library that comes with Ubuntu 20.04; instead, install the latest version with:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;sudo -H pip3 install --ignore-installed git+https://github.com/OCA/openupgradelib.git@master&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, we need to clone the GitHub repo and switch to the right branch for the upgrade, in this case Flectra 1.7 (Odoo 11) to Odoo 12:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;git clone https://github.com/OCA/OpenUpgrade.git&lt;/li&gt;
	&lt;li&gt;cd OpenUpgrade&lt;/li&gt;
	&lt;li&gt;git switch -c 12.0 origin/12.0&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Install The Odoo Versions of Any 3rd Party Modules&lt;/h2&gt;

&lt;p&gt;We use the account financial report module, amongst others. We didn't find Odoo versions of all the 3rd-party modules that we used in Flectra; this is probably the cause of some of the issues we have experienced.&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;sudo cp account_financial_report-12.0.1.5.2.zip /var/lib/odoo/.local/share/Odoo/addons/12.0/&lt;/li&gt;
	&lt;li&gt;sudo unzip /var/lib/odoo/.local/share/Odoo/addons/12.0/account_financial_report-12.0.1.5.2.zip -d /var/lib/odoo/.local/share/Odoo/addons/12.0/&lt;/li&gt;
	&lt;li&gt;sudo chown -R odoo:odoo /var/lib/odoo/.local/share/Odoo/addons/12.0/&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Ready, Set, Go&lt;/h2&gt;

&lt;p&gt;We will assume the upgrade steps complete for now. As you may suspect, running the upgrade steps is where most of the work lies, and we will document the challenges we came across in the following sections.&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;Stop the odoo service.
	&lt;ul&gt;
		&lt;li&gt;"sudo systemctl stop odoo"&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;Run the OpenUpgrade script from within the OpenUpgrade directory.
	&lt;ul&gt;
		&lt;li&gt;"sudo .Openupgrade/odoo-bin -c /etc/odoo/odoo.conf --data-dir /var/lib/odoo/.local/share/Odoo&amp;nbsp; --database finance --update=all --stop-after-init"&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;Next run the Odoo upgrade (note: we are running the installed odoo binary, not the OpenUpgrade one)
	&lt;ul&gt;
		&lt;li&gt;"sudo odoo -c /etc/odoo/odoo.conf --data-dir /var/lib/odoo/.local/share/Odoo&amp;nbsp; --database finance --update=all --stop-after-init"&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;Start Odoo
	&lt;ul&gt;
		&lt;li&gt;"sudo systemctl start odoo"&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Profit?&lt;/h2&gt;

&lt;p&gt;Not quite. As you can guess, the upgrade steps don't go according to plan.&lt;/p&gt;

&lt;h3&gt;Issues Before the Upgrade&lt;/h3&gt;

&lt;p&gt;Before we began the upgrade to Odoo 12, we ran an update on Flectra 1.7 to make sure the database was up to date.&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;"sudo odoo -c /etc/odoo/odoo.conf --data-dir /var/lib/odoo/.local/share/Odoo&amp;nbsp; --database finance --update=all --stop-after-init"&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Flectra 1.7 Theme Update Errors&lt;/h3&gt;

&lt;p&gt;During this process, we got errors updating the themes. Theme Art, Hermit, Techperspective etc. were noted as not upgraded due to being incompatible. We got the same error attempting to update them via the web interface. We tried uninstalling and then reinstalling them; we managed to get Theme Art to install, at least according to the web interface, but none of the other themes would install. Strangely, the e-commerce sites didn't seem bothered that their themes were missing. We have multiple e-commerce sites, and since the same theme cannot be used on different sites, we had to install a different theme for each site. So before the upgrade only Theme Art was installed, even though other sites were using "Theme Hermit" etc. There is some issue here that may be causing problems with our upgrade. Has anyone else had these problems with themes and Flectra 1.7?&lt;/p&gt;

&lt;h2&gt;Issues Upgrading from Flectra 1.7 to Odoo 12&lt;/h2&gt;

&lt;p&gt;We got several errors during the OpenUpgrade process.&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;The process halted with an error about an insert into the name column on the &lt;strong&gt;product_template&lt;/strong&gt; table being null. The insert appears to be part of the account_invoice migration. There is a line from a purchase invoice that the process wants to create a product template for. It's for a tip on a lunch bill; I have no idea why. There are two rows with this problem. We got around this by adding a default value to the column with:

	&lt;ul&gt;
		&lt;li&gt;"alter table &lt;strong&gt;product_template&lt;/strong&gt; alter column name set &amp;nbsp;default &amp;nbsp;'change me'";&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;We got errors when the upgrade script tried to delete some lines from &lt;strong&gt;ir_ui_menu&lt;/strong&gt;. The delete fails due to a foreign key on the &lt;strong&gt;menu_bookmark&lt;/strong&gt; table, so we deleted the entries from &lt;strong&gt;menu_bookmark&lt;/strong&gt;. It might be a good idea to just unbookmark everything in production before backing up the database.
	&lt;ul&gt;
		&lt;li&gt;"delete &amp;nbsp;from &lt;strong&gt;menu_bookmark&lt;/strong&gt; where menu_id=365;" -&amp;gt; Your menu_id will be different.&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;We had an error when the process tried to delete a line from the &lt;strong&gt;res_country&lt;/strong&gt; table for Romania, as it was referenced from res_partner. I found the account and updated its address to point to another country; I will come back and switch it back once done.
	&lt;ul&gt;
		&lt;li&gt;"update &amp;nbsp;res_partner set country_id=27, state_id =339 where id = 2879;"&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;It complained about duplicates in the product_wishlist so I found and deleted those.
	&lt;ul&gt;
		&lt;li&gt;"delete from product_wishlist where id in &amp;nbsp;(620,572,334);"&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;For one of our modules we needed to update references to the class &lt;strong&gt;WebsiteSalesOption&lt;/strong&gt; to &lt;strong&gt;WebsiteSale&lt;/strong&gt;, as these had been merged in 12.0.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After this, the OpenUpgrade process ran to the end, but the following modules were not upgraded:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;module base_branch_company&lt;/li&gt;
	&lt;li&gt;module account_bank_statement_import_ofx&lt;/li&gt;
	&lt;li&gt;module account_discount&lt;/li&gt;
	&lt;li&gt;module theme_art&lt;/li&gt;
	&lt;li&gt;module currency_rate_update&lt;/li&gt;
	&lt;li&gt;module account_asset_management&lt;/li&gt;
	&lt;li&gt;module sales_discount&lt;/li&gt;
	&lt;li&gt;module account_cash_flow&lt;/li&gt;
	&lt;li&gt;module payment_payweb_3&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will need to check the &lt;strong&gt;ir_module_module&lt;/strong&gt; table to see if we can work out the modules we need here, mapping from Flectra to Odoo.&lt;/p&gt;
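&lt;p&gt;A starting point for that check might be a query along these lines (the state filter is our guess at what's useful; adjust as needed):&lt;/p&gt;

```sql
-- List modules that are neither cleanly installed nor uninstalled
-- after the migration, to see which Flectra modules need an Odoo match.
select name, state, latest_version
from ir_module_module
where state not in ('installed', 'uninstalled')
order by name;
```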

&lt;h3&gt;Running Odoo 12&lt;/h3&gt;

&lt;p&gt;I edited my client machine's hosts file to point to the upgrade server, to ensure that any URL would be mapped correctly, then started the Odoo service and accessed the website at http://url:8069&lt;/p&gt;
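&lt;p&gt;For anyone unfamiliar with the trick, the hosts entry looks something like this (the IP and hostnames below are examples, not our real ones):&lt;/p&gt;

```
# /etc/hosts on the client machine, pointing the site hostnames
# at the upgrade VM instead of production
192.168.56.10   shop.example.co.za www.example.co.za
```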

&lt;p&gt;The first page looked promising and had only minor style issues but upon logging in numerous errors were presented. We suspect most of these relate to the missing modules.&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;When browsing to /shop we get 500 errors. The following is in odoo-server.log
	&lt;ul&gt;
		&lt;li&gt;KeyError: ('ir.qweb', &amp;lt;function IrQWeb.compile at 0x7f96da9f59d0&amp;gt;, 'website_sale.product_view_switcher', ('en_US', None, None, None, False, 2))&lt;/li&gt;
		&lt;li&gt;KeyError: ('ir.ui.view', &amp;lt;function View.get_view_id at 0x7f96d3866dc0&amp;gt;, 4, 'website_sale.product_view_switcher', (2,))&lt;/li&gt;
		&lt;li&gt;ValueError: View 'website_sale.product_view_switcher' in website 2 not found&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;When browsing to /web we get a pop up with:&lt;/li&gt;
	&lt;li&gt;"Could not get content for /account_asset/static/src/less/account_asset.less defined in bundle 'web.assets_backend'."&lt;/li&gt;
	&lt;li&gt;Closing the above pop-up we get other errors depending on the menu item selected. Here are some of them.
	&lt;ul&gt;
		&lt;li&gt;Inventory -&amp;gt; Master Data
		&lt;ul&gt;
			&lt;li&gt;ValueError: Field `asset_category_id` does not exist&lt;/li&gt;
		&lt;/ul&gt;
		&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will be updating this post before documenting our process from 12 to 13. We are unsure if we need to resolve all these problems in 12 before upgrading to 13 and then 14. Can we just ignore the missing modules until we get to Flectra 2 where they will be available again?&lt;/p&gt;

&lt;p&gt;We continued to upgrade from Odoo 12 to Odoo 13 and overcame some obstacles there, but we still have a way to go to get to Flectra 2 and see if it all works. We will document these in a later post too. Please share your stories and experiences.&lt;/p&gt;</summary>
    <dc:creator>Mark Clarke</dc:creator>
    <dc:date>2023-08-16T12:49:00Z</dc:date>
  </entry>
  <entry>
    <title>Advancing Your Cybersecurity Career</title>
    <link rel="alternate" href="https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=6608350" />
    <author>
      <name>Mark Clarke</name>
    </author>
    <id>https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=6608350</id>
    <updated>2024-02-22T08:47:12Z</updated>
    <published>2023-04-10T12:31:00Z</published>
    <summary type="html">&lt;p&gt;In my previous blog post, I set out our recommendations for starting out on your cybersecurity journey dealing with the &lt;a href="https://blogs.jumpingbean.info/blogs/-/blogs/want-a-career-in-cybersecurity-part-1" rel="noopener noreferrer"&gt;fundamental skills required to become a cybersecurity professional&lt;/a&gt;. In this post, we start to look at specifically cybersecurity-focused certifications.&lt;/p&gt;

&lt;p&gt;In the world of cybersecurity, obtaining certifications is one way to demonstrate your knowledge and expertise in the field. There are many different types of certifications available, but they can generally be divided into two streams: technical certifications and management and governance certifications. In this post, we will focus on management and governance certifications that are specifically geared toward cybersecurity professionals.&lt;/p&gt;

&lt;p&gt;Although management and governance certifications require some technical knowledge, they tend to focus more on the business side of cybersecurity. Professionals who hold these certifications often work in management positions and are responsible for overseeing cybersecurity functions within an organization. These professionals tend to be paid more than technical cybersecurity professionals, as their roles require them to interact with business units to align cybersecurity with the organization's goals. This means these certifications are more widely recognized by non-cybersecurity personnel, the value their holders bring is easier to identify, and this, fair or not, translates into higher pay.&lt;/p&gt;

&lt;p&gt;Here are some of the top cybersecurity management and governance certifications recommended by Jumping Bean:&lt;/p&gt;

&lt;ol&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;a href="https://cissptraining.co.za/w/cissp-training-course" rel="noopener noreferrer"&gt;Certified Information System Security Professional (CISSP)&lt;/a&gt;: Offered by the International Information System Security Certification Consortium (ISC)², the CISSP certification is globally recognized and covers eight domains related to cybersecurity management. These domains include security and risk management, asset security, communication and network security, identity, and access management, security assessment and testing, security operations, software development security, and security architecture and engineering.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;a href="https://isacaaccreditedtraining.com/w/cisa-certification" rel="noopener noreferrer"&gt;Certified Information System Auditor (CISA)&lt;/a&gt;: This certification, offered by ISACA, is designed for professionals interested in auditing, control, and security of information systems. It covers topics such as information system auditing, governance, risk management, and information security management.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;a href="https://isacaaccreditedtraining.com/w/cism-certification" rel="noopener noreferrer"&gt;Certified Information System Manager (CISM)&lt;/a&gt;: Also offered by ISACA, the CISM certification is designed for professionals interested in cybersecurity management roles. It covers topics such as information security governance, risk management, incident management, and program development and management.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;a href="https://isacaaccreditedtraining.com/w/cgeit-certification" rel="noopener noreferrer"&gt;Certified in the Governance of Enterprise IT (CGEIT)&lt;/a&gt;: This certification, also offered by ISACA, is designed for professionals who are responsible for managing and governing enterprise IT. It covers topics such as IT governance frameworks, strategic alignment, and value delivery.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;a href="https://pecbisocertificationtraining.com/w/iso-27001-training-courses" rel="noopener noreferrer"&gt;PECB 27001 - ISMS&lt;/a&gt;: This certification, offered by the Professional Evaluation and Certification Board (PECB), focuses on the implementation and management of an Information Security Management System (ISMS) based on the ISO/IEC 27001 standard.&lt;/p&gt;
	&lt;/li&gt;
	&lt;li&gt;
	&lt;p&gt;&lt;a href="https://pecbisocertificationtraining.com/w/iso-27002-training-courses" rel="noopener noreferrer"&gt;PECB 27002 - Controls&lt;/a&gt;: This certification, also offered by PECB, focuses on implementing and managing information security controls based on the ISO/IEC 27002 standard.&lt;/p&gt;
	&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It's important to note that these certifications have prerequisites and require ongoing education and recertification to maintain their validity. As with any certification, it's important to consider your career goals and choose a certification that aligns with those goals. In our next post, we will cover recommended technical certifications for those who prefer technical work.&lt;/p&gt;</summary>
    <dc:creator>Mark Clarke</dc:creator>
    <dc:date>2023-04-10T12:31:00Z</dc:date>
  </entry>
  <entry>
    <title>Want a career in Cybersecurity?</title>
    <link rel="alternate" href="https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=6546310" />
    <author>
      <name>Mark Clarke</name>
    </author>
    <id>https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=6546310</id>
    <updated>2023-04-02T07:19:49Z</updated>
    <published>2023-03-04T04:08:00Z</published>
    <summary type="html">&lt;p&gt;As artificial intelligence begins to automate many of the entry-level jobs in IT, the cycle of automation started by the first industrial revolution is disrupting a sector previously thought a safe haven. It has become more important than ever to find opportunities for knowledge workers where demand for skills outstrips supply and where machines cannot yet replace humans.&amp;nbsp; One such sector is cyber security.&lt;/p&gt;

&lt;p&gt;As a subject matter expert at Jumping Bean, I am often asked how one gets started in cyber security. What is the suggested learning path to acquire the skills necessary to be a cybersecurity professional? What &lt;a href="https://cybersecuritytraining.tech/all-training-courses" rel="noopener noreferrer"&gt;cyber security courses&lt;/a&gt; should one attend?&lt;/p&gt;

&lt;p&gt;To start a career in cyber security one needs a basic understanding of computing concepts, experience administering operating systems, and an understanding of the fundamental network protocols. If you are completely new to the field, I usually suggest doing a &lt;a href="https://cybersecuritytraining.tech/w/comptia/a-plus-training" rel="noopener noreferrer"&gt;self-paced CompTIA A+&lt;/a&gt;, followed by a &lt;a href="https://cybersecuritytraining.tech/w/comptia/n-plus-training" rel="noopener noreferrer"&gt;self-paced N+ course from CompTIA&lt;/a&gt;. CompTIA's A+ covers basic computing concepts such as hardware, operating systems and security. CompTIA N+ covers networking concepts in more detail and is essential for cybersecurity specialists. To get the knowledge to administer operating systems, we suggest &lt;a href="https://linuxcertification.co.za/w/lpi/linux-essentials" rel="noopener noreferrer"&gt;Linux Essentials&lt;/a&gt; from the Linux Professional Institute or &lt;a href="https://cybersecuritytraining.tech/w/comptia-linux-training" rel="noopener noreferrer"&gt;Linux+ from CompTIA for self-paced study&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It is not necessary to get the certifications, but one should master the objectives of these certifications to make sure one has the right foundation for tackling cybersecurity. If you already have these skills, either from previous job experience or from tinkering with computers, you may already have the foundation to start the process of becoming a cybersecurity specialist.&lt;/p&gt;

&lt;h2&gt;Foundational Skills &amp;amp; Knowledge for cyber security&lt;/h2&gt;

&lt;p&gt;The first cyber security-specific training course we suggest is &lt;a href="https://cybersecuritytraining.tech/w/comptia/security-training" rel="noopener noreferrer"&gt;CompTIA's Security+&lt;/a&gt;. This is a basic cyber security course, but it will make sure your foundations are solid for undertaking more in-depth cyber security courses. Once this has been accomplished, it's time to decide if one wants a more technical or managerial career path in cyber security. By this I mean: does one want to manage the IT information security function, ensuring cyber security risk is addressed in a compliant way with appropriate organisational governance structures and procedures, or does one want to be more technical, a Red/Blue team member who undertakes penetration tests, tests control compliance and configures secure infrastructure?&lt;/p&gt;

&lt;h2&gt;Tune in for more&lt;/h2&gt;

&lt;p&gt;Tune in to the second part of this series where we cover the managerial certification path and in part 3 we look at the technical certifications. We will update this page with links once they are published.&lt;/p&gt;

</summary>
    <dc:creator>Mark Clarke</dc:creator>
    <dc:date>2023-03-04T04:08:00Z</dc:date>
  </entry>
  <entry>
    <title>Quiz Time - 5 Jan 2023</title>
    <link rel="alternate" href="https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=6396304" />
    <author>
      <name>Mark Clarke</name>
    </author>
    <id>https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=6396304</id>
    <updated>2023-01-06T09:46:58Z</updated>
    <published>2023-01-06T08:45:00Z</published>
    <summary type="html">&lt;p&gt;&lt;img data-fileentryid="6386711" src="https://blogs.jumpingbean.info/documents/portlet_file_entry/6386305/quizz1-java-increment.png/4c9794d4-0527-34bf-ff5b-115f4302d37d" /&gt;&lt;br /&gt;
 &lt;/p&gt;

&lt;p&gt;The possibly unexpected answer to the above is C. This is one of those trick questions they like to ask in certification exams or interviews. To understand why the answer is C and not A one needs to appreciate what the operators do.&amp;nbsp; This type of "trivia" is often glossed over when one is learning Java but having a solid understanding of what is going on turns one from a mediocre programmer to a great programmer. Our &lt;a href="https://java-training.net/java-core-to-advanced-courses#java-fundamentals-training" rel="noopener noreferrer" target="_blank"&gt;Java Essentials course&lt;/a&gt; makes sure we cover all these bases.&lt;/p&gt;

&lt;h3&gt;Post Increment/Decrement Operators Sequence&lt;/h3&gt;

&lt;p&gt;The post-increment operator (x++) will return the value of the operand and then increment it, while the pre-increment operator (++x) will increase the value and then return it.&lt;/p&gt;

&lt;p&gt;So by the time we get to "return x++", the original value of x = 10 has already been reduced to 9 by the "x--" statement. That statement evaluated to the old value, 10, which wasn't assigned to anything, but x was updated to 9. When we then get to "return x++", it first supplies the value of x, which is 9, to the return statement and then increments x. Incrementing x has no effect, as the function has returned and the x parameter is now out of scope.&lt;/p&gt;
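&lt;p&gt;The quiz code itself is in the image above; as a stand-in, the behaviour can be reproduced with a method like this (our reconstruction, not the exact quiz listing):&lt;/p&gt;

```java
public class Quiz {
    static int f(int x) {
        // x arrives as 10. "x--" evaluates to 10 (the old value, which is
        // discarded here) and leaves x at 9.
        x--;
        // "return x++" supplies the current value, 9, to the return
        // statement; the increment happens afterwards on a parameter that
        // is about to go out of scope, so it is never observed.
        return x++;
    }

    public static void main(String[] args) {
        System.out.println(f(10)); // prints 9
    }
}
```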

&lt;h3&gt;Incomplete Explanation - Operator Precedence&lt;/h3&gt;

&lt;p&gt;Often people refer to operator precedence when discussing this question, but the post-increment/decrement and pre-increment/decrement operators all have very high precedence, so precedence alone does not explain the observed behaviour; what matters is whether the operator yields the value before or after the update.&lt;/p&gt;

&lt;h4&gt;Java Operator Precedence Table&lt;/h4&gt;


&lt;table border="1" cellpadding="5"&gt;
	&lt;tbody&gt;
		&lt;tr&gt;
			&lt;th id="h1"&gt;Operator type&lt;/th&gt;
			&lt;th id="h2"&gt;Operators (highest precedence first)&lt;/th&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td headers="h1"&gt;postfix&lt;/td&gt;
			&lt;td headers="h2"&gt;&lt;code&gt;&lt;em&gt;expr&lt;/em&gt;++ &lt;em&gt;expr&lt;/em&gt;--&lt;/code&gt;&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td headers="h1"&gt;unary&lt;/td&gt;
			&lt;td headers="h2"&gt;&lt;code&gt;++&lt;em&gt;expr&lt;/em&gt; --&lt;em&gt;expr&lt;/em&gt; +&lt;em&gt;expr&lt;/em&gt; -&lt;em&gt;expr&lt;/em&gt; ~ !&lt;/code&gt;&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td headers="h1"&gt;multiplicative&lt;/td&gt;
			&lt;td headers="h2"&gt;&lt;code&gt;* / %&lt;/code&gt;&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td headers="h1"&gt;additive&lt;/td&gt;
			&lt;td headers="h2"&gt;&lt;code&gt;+ -&lt;/code&gt;&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td headers="h1"&gt;shift&lt;/td&gt;
			&lt;td headers="h2"&gt;&lt;code&gt;&amp;lt;&amp;lt; &amp;gt;&amp;gt; &amp;gt;&amp;gt;&amp;gt;&lt;/code&gt;&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td headers="h1"&gt;relational&lt;/td&gt;
			&lt;td headers="h2"&gt;&lt;code&gt;&amp;lt; &amp;gt; &amp;lt;= &amp;gt;= instanceof&lt;/code&gt;&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td headers="h1"&gt;equality&lt;/td&gt;
			&lt;td headers="h2"&gt;&lt;code&gt;== !=&lt;/code&gt;&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td headers="h1"&gt;bitwise AND&lt;/td&gt;
			&lt;td headers="h2"&gt;&lt;code&gt;&amp;amp;&lt;/code&gt;&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td headers="h1"&gt;bitwise exclusive OR&lt;/td&gt;
			&lt;td headers="h2"&gt;&lt;code&gt;^&lt;/code&gt;&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td headers="h1"&gt;bitwise inclusive OR&lt;/td&gt;
			&lt;td headers="h2"&gt;&lt;code&gt;|&lt;/code&gt;&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td headers="h1"&gt;logical AND&lt;/td&gt;
			&lt;td headers="h2"&gt;&lt;code&gt;&amp;amp;&amp;amp;&lt;/code&gt;&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td headers="h1"&gt;logical OR&lt;/td&gt;
			&lt;td headers="h2"&gt;&lt;code&gt;||&lt;/code&gt;&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td headers="h1"&gt;ternary&lt;/td&gt;
			&lt;td headers="h2"&gt;&lt;code&gt;? :&lt;/code&gt;&lt;/td&gt;
		&lt;/tr&gt;
		&lt;tr&gt;
			&lt;td headers="h1"&gt;assignment&lt;/td&gt;
			&lt;td headers="h2"&gt;&lt;code&gt;= += -= *= /= %= &amp;amp;= ^= |= &amp;lt;&amp;lt;= &amp;gt;&amp;gt;= &amp;gt;&amp;gt;&amp;gt;=&lt;/code&gt;&lt;/td&gt;
		&lt;/tr&gt;
	&lt;/tbody&gt;
&lt;/table&gt;</summary>
    <dc:creator>Mark Clarke</dc:creator>
    <dc:date>2023-01-06T08:45:00Z</dc:date>
  </entry>
  <entry>
    <title>Pacman Proximity Alarm</title>
    <link rel="alternate" href="https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=6394511" />
    <author>
      <name>Mark Clarke</name>
    </author>
    <id>https://blogs.jumpingbean.info/c/blogs/find_entry?p_l_id=294932&amp;entryId=6394511</id>
    <updated>2023-01-06T09:40:17Z</updated>
    <published>2023-01-05T18:51:00Z</published>
    <summary type="html">&lt;p&gt;Ever since Arduino first captured the imagination of the world I have harboured the desire to learn and master the art of hardware hacking. Actually, even before the Arduino, I wished to know more about the devices that I programmed, almost exclusively microprocessors, and to be able to create new solutions using hardware and not just software. My thought, at that stage, was that I would first need to learn electronics and electrical theory before being able to start building circuit boards.&amp;nbsp;&lt;/p&gt;

&lt;h3&gt;The Arduino Abstracts Away the Engineering Behind Circuit Boards&lt;/h3&gt;

&lt;p&gt;This frame of mind made it a bit difficult to understand the Arduino when I first approached it since it requires only a basic understanding of electronics to start building new and exciting hardware solutions with it. Now I appreciate, as probably most others did right away, that the Arduino abstracts away the complexity of printed circuit boards and provides one with a Lego-like array of sensors and actuators to design and build your own hardware solutions.&lt;/p&gt;

&lt;p&gt;The key to conceptualizing and implementing your own creation is to know what types of sensors exist, what they can do and how they work, and then to imagine new ways of combining them to solve a problem for you.&lt;/p&gt;

&lt;p&gt;Perhaps someday I will still get down to designing my own printed circuit board but the Arduino is a great halfway point on that journey.&lt;/p&gt;

&lt;h2&gt;The Vision&lt;/h2&gt;

&lt;p&gt;For a while, we have had a Pac-man Ghost lamp at the office that was relocated from its position at reception when we set up our POS terminal and has since been lying around idle. It is powered by a USB connector and the thought crossed my mind it would be great to wire it up to an Arduino, throw in a distance sensor and have it light up when it detected someone moving around the reception area. Besides being cool, at least for me, it would also serve as a notification to everyone that there was some activity at reception.&lt;/p&gt;

&lt;p&gt;To fully appreciate the vision you need to know that we have a Pac-man-themed floor at the office.&lt;/p&gt;

&lt;p&gt;&lt;img data-fileentryid="6394534" src="https://blogs.jumpingbean.info/documents/portlet_file_entry/6386305/pacman-floor.jpg/971b0b03-bc69-6ade-b4cb-d674534b9296" /&gt;&lt;br /&gt;
 &lt;/p&gt;

&lt;h3&gt;Component Diagram &amp;amp; Sketch Code&lt;/h3&gt;

&lt;p&gt;Below is a schematic diagram of the final design. It's my first time using the Fritzing layout software, so please forgive the skewed lines; I couldn't work out how to make the connectors align correctly.&lt;/p&gt;

&lt;p&gt;&lt;img data-fileentryid="6394545" src="https://blogs.jumpingbean.info/documents/portlet_file_entry/6386305/ghost_bb.png/31976267-2113-a9bd-f1b6-b11205ca703b" /&gt;&lt;br /&gt;
&lt;/p&gt;

&lt;p&gt;Below is the list of components used. These were selected because that's what I had on hand.&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;&lt;a href="https://cyberconnect.shop/shop/product/uno-plus-improved-uno-arduino-compatible-520" target="_blank"&gt;Arduino UNO&lt;/a&gt;&lt;/li&gt;
	&lt;li&gt;&lt;a href="https://cyberconnect.shop/shop/product/g153568993174-ultrasonic-hc-sr04-range-finding-sensor-12?category=5&amp;amp;order=name+desc#scrollTop=0" target="_blank"&gt;Ultrasonic HC-SR04 Sensor&lt;/a&gt;&lt;/li&gt;
	&lt;li&gt;Piezo buzzer&lt;/li&gt;
	&lt;li&gt;Pac-man Ghost lamp&lt;/li&gt;
	&lt;li&gt;Old Micro USB cable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below is the sketch code for the Ghost Lamp Proximity Alarm.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;#define trigPin 2&lt;br /&gt;
#define echoPin 3&lt;br /&gt;
#define lightPin 11&lt;br /&gt;
#define buzzer 4&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;int lastDistance=0;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;void setup() {&lt;br /&gt;
&amp;nbsp; pinMode(lightPin, OUTPUT);&lt;br /&gt;
&amp;nbsp; pinMode(trigPin,OUTPUT);&lt;br /&gt;
&amp;nbsp; pinMode(buzzer,OUTPUT);&lt;br /&gt;
&amp;nbsp; pinMode(echoPin,INPUT);&lt;br /&gt;
&amp;nbsp; lastDistance=getDistance();&lt;br /&gt;
&amp;nbsp; delay(50);&lt;br /&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;void loop() {&lt;br /&gt;
&amp;nbsp; int currentDistance = getDistance();&lt;br /&gt;
&amp;nbsp; // want to minimize false alarms, so only sound the alarm if the distance&lt;br /&gt;
&amp;nbsp; // changes by more than 2cm and the object is approaching, not receding&lt;br /&gt;
&amp;nbsp; if (lastDistance-currentDistance&amp;gt;2){&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp; soundAlarm();&lt;br /&gt;
&amp;nbsp; }else {&lt;br /&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp; shutoffAlarm();&lt;br /&gt;
&amp;nbsp; }&lt;br /&gt;
&amp;nbsp; lastDistance = currentDistance;&lt;br /&gt;
&amp;nbsp; delay(50);&lt;br /&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;int getDistance(){&lt;br /&gt;
&amp;nbsp; digitalWrite(trigPin, LOW);&lt;br /&gt;
&amp;nbsp; delayMicroseconds(5);&lt;br /&gt;
&amp;nbsp; digitalWrite(trigPin, HIGH);&lt;br /&gt;
&amp;nbsp; delayMicroseconds(10);&lt;br /&gt;
&amp;nbsp; digitalWrite(trigPin, LOW);&lt;br /&gt;
&amp;nbsp; long duration = pulseIn(echoPin, HIGH); // pulseIn returns a long; an int can overflow on long echoes&lt;br /&gt;
&amp;nbsp; return duration * 0.034 / 2;&lt;br /&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;void soundAlarm(){&lt;br /&gt;
&amp;nbsp; tone(buzzer, 1000, 500);&lt;br /&gt;
&amp;nbsp; digitalWrite(lightPin, HIGH);&amp;nbsp; &amp;nbsp;&lt;br /&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;void shutoffAlarm(){&lt;br /&gt;
&amp;nbsp; digitalWrite(lightPin, LOW);&amp;nbsp;&amp;nbsp; &amp;nbsp;&lt;br /&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;Step-by-Step Guide&lt;/h2&gt;

&lt;p&gt;I approached this project just like one approaches a coding task. Identify the sub-components, get them working, then integrate them and test them along the way.&lt;/p&gt;

&lt;p&gt;The three main tasks consisted of:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;&lt;strong&gt;the Ghost Lamp&lt;/strong&gt; - providing it with power and getting its LED to fire from code&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;the Proximity sensor&lt;/strong&gt; - wiring it up, getting the distance calculations correct and integrating it with the Ghost Lamp firing code&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;the buzzer&lt;/strong&gt; - sounding it when movement is detected&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;The Ghost Lamp - Wiring up the USB to the Ghost Lamp&lt;/h3&gt;

&lt;p&gt;I started with snipping off the USB A connector from the USB cable. This caused me some distress as I hate destroying something that is working fine and has a useful life ahead of it.&lt;img data-fileentryid="6394525" src="https://blogs.jumpingbean.info/documents/portlet_file_entry/6386305/usb-wires.jpg/fa247c19-f370-4183-aefc-c7b4d9be36f9" /&gt;&lt;br /&gt;
 &lt;/p&gt;

&lt;p&gt;I would have preferred to use one of those faulty USB cables that I have lying around somewhere but as Murphy knows, it's only locatable when you need a working one.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;A USB cable is simple: four wires, two for sending and receiving data and two for power and ground. I deduced that the Ghost was not using the data wires. Once stripped, the four wires were easy to identify by their colours: red for the 5V wire and black for ground. The hardest part was stripping the thin wires to expose the core; it took me a few goes, but I eventually got it right.&lt;/p&gt;

&lt;p&gt;To connect the exposed wires to the &lt;a href="https://cyberconnect.shop/shop/product/uno-plus-improved-uno-arduino-compatible-520?search=PLus" target="_blank"&gt;Arduino UNO&lt;/a&gt; I soldered some male-to-male connectors I had lying about onto the wires. I attached the connectors to the UNO's ground and 5V pins for a quick test. Success! The Ghost Lamp lit up!&lt;/p&gt;

&lt;p&gt;Next, I wrote a quick sketch to pulse the power on one of the digital pins, rewired the live wire to the selected pin, compiled and tested. Success again! This is essentially the same sketch one uses for the Hello World of Arduino programming, but instead of a blinking LED I had a blinking Ghost light.&lt;/p&gt;

&lt;h3&gt;The Ultrasonic SR04 Sensor - basic distance sensing algorithm&lt;/h3&gt;

&lt;p&gt;Next was the &lt;a href="https://cyberconnect.shop/shop/product/g153568993174-ultrasonic-hc-sr04-range-finding-sensor-12?category=5&amp;amp;order=name+desc#scrollTop=0" target="_blank"&gt;Ultrasonic SR04 sensor&lt;/a&gt;. There are several types of distance sensors: the SR04 detects distance by bouncing ultrasonic sound waves off objects, whilst others, like the &lt;a href="https://cyberconnect.shop/shop/product/ir-infrared-photoelectric-obstacle-avoidance-sensor-module-kit-331"&gt;infrared obstacle avoidance sensor&lt;/a&gt;, work with reflected infrared light instead of sound waves.&lt;/p&gt;

&lt;p&gt;Wiring up the Ultrasonic SR04 sensor wasn't too difficult. Like most components, it has a VCC pin and a ground pin. The magic happens with the trigger and echo pins: the trigger pin initiates an ultrasonic burst, and the echo pin detects when the signal bounces back. Using the time elapsed between triggering the burst and receiving its echo, together with the speed of sound, one can calculate an approximate distance to the nearest object. For the technical details on the SR04 ultrasonic sensor and the timings used in the code, check out the &lt;a href="https://blogs.jumpingbean.info/documents/1094516/0/HCSR04.pdf/cdaa31d9-5e34-94f7-024f-6670810be6e2?t=1604314314870" target=""&gt;SR04 datasheet&lt;/a&gt;.&lt;/p&gt;
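The conversion inside getDistance() can be checked on its own: sound travels at roughly 0.034 cm per microsecond, and the echo time covers the round trip, so the one-way distance is duration × 0.034 / 2. A quick Java sanity check of that arithmetic (the helper name is mine, not from the sketch):

```java
public class Sr04Math {
    // Convert an HC-SR04 echo pulse width (microseconds) to centimetres.
    // Speed of sound ~0.034 cm/us; divide by 2 because the pulse travels
    // out to the object and back again.
    public static double pulseToCm(long durationMicros) {
        return durationMicros * 0.034 / 2.0;
    }

    public static void main(String[] args) {
        // a 1000 us echo corresponds to about 17 cm
        System.out.println(pulseToCm(1000));
    }
}
```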

&lt;p&gt;In a complex set-up like my workspace this was problematic, as the sound waves returning to the sensor may bounce off a static foreground object rather than the more distant, larger moving object I am interested in.&lt;/p&gt;

&lt;p&gt;For example, you may want to detect someone moving closer to the sensor in a room, and their distance from it, rather than simply the nearest object in the room. At my workspace the sensor would occasionally register a change in distance even when nothing was moving. This could be due to sound waves bouncing off objects, interfering with one another and arriving at odd times, but I am only speculating here as I am not a sound wave expert. Maybe it was detecting those evil IT gremlins that cause so many unexplained technical issues? I will test in a less complex space over the next couple of days and let you know.&lt;/p&gt;
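One common way to tame spurious readings like these, which the sketch above does not do, is to take several measurements and use the median so a single wild sample is discarded. A sketch of the idea in Java (the sketch code itself is C-like, but the logic ports directly):

```java
import java.util.Arrays;

public class MedianFilter {
    // Median of the last three readings: one rogue sample cannot change
    // the output, which suppresses isolated false triggers.
    public static int medianOfThree(int a, int b, int c) {
        int[] window = {a, b, c};
        Arrays.sort(window);
        return window[1];
    }

    public static void main(String[] args) {
        // a single spike (90) between two sane readings is ignored
        System.out.println(medianOfThree(31, 90, 30)); // prints 31
    }
}
```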

&lt;h3&gt;The Piezo Buzzer&lt;/h3&gt;

&lt;p&gt;Finally, I added a piezo buzzer. I had hoped to implement the waka-waka Pac-man sound effect, but sadly my Google fu failed to find any existing code for it, nor could I find any musical notation for the sound. I could find the notes for the Pac-man theme tune, but that is way too much typing to implement. For now I settled for simply sounding the buzzer when movement was detected; this was easy enough to integrate into the code.&lt;/p&gt;

&lt;h2&gt;Deployment&lt;/h2&gt;

&lt;p&gt;I deployed the set-up at the office. It works better than in my crowded workspace but still gives off some false alarms. After a while, the buzzer was disconnected for obvious reasons.&lt;/p&gt;

&lt;div class="embed-responsive embed-responsive-16by9" data-embed-id="https://www.youtube.com/embed/xppWXNo7eJM?rel=0" data-styles="{&amp;quot;width&amp;quot;:&amp;quot;100%&amp;quot;}" style="width:100%"&gt;&lt;iframe allow="autoplay; encrypted-media" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/xppWXNo7eJM?rel=0" width="560"&gt;&lt;/iframe&gt;&lt;/div&gt;

&lt;p&gt; &lt;/p&gt;</summary>
    <dc:creator>Mark Clarke</dc:creator>
    <dc:date>2023-01-05T18:51:00Z</dc:date>
  </entry>
</feed>
