Wednesday, October 19, 2016

Thoughts on the 6th Colombo IoT meetup

I suppose I'll leave this post here for lack of a better place.

The meetup was titled Rise with Cloud Intelligence and, staying with the theme, the talks revolved around IoT PaaS offerings. The first couple of speakers (employed at Persistent Systems and Virtusa Polaris respectively) talked about their experiences working with IBM Bluemix; their segments leaned towards demonstrations and explanations of the work they had done with the platform. Considering the time constraints of the event, I felt it would have been more useful had some of these talks been kept at a conceptual level, because at times it seemed like time was wasted explaining simple nitty-gritty. All in all, the segment gave a holistic image of the capabilities of IBM Bluemix.

The segment on Bluemix was followed by an interesting talk on an IoT-enabled PaaS solution by two gentlemen with thick American accents. The first speaker (one Dean Hamilton) started off sharing some statistics on the IoT space (published by an organization going by the name of ABI Research). According to the speaker, the space can be divided into three subcategories: the device manufacturers (hardware), connectivity (the business of getting sensor data to the servers) and the server side of things (control and analytics). Common sense and the statistics shown to us said most of the big IoT money everyone is talking about is expected to be made in the software and services category.

Interestingly, the solution Dean presented to us was aimed at the first category: an IoT-enabled PaaS that device manufacturers can use to monetize the sensor data pushed back to them, to get a piece of the IoT pie. He took the example of how a manufacturer of agricultural machinery (harvesters and such) could have the sensor data pushed onto the PaaS, enriched with other services (aligning with the manufacturer's interests) and sold to any interested party (such as fertilizer manufacturers). Their solution tries to cater to this need.

As an engineer who attends such events to keep in step with the happenings of the local community, I walked away satisfied.

Saturday, July 2, 2016

How to find more needles

If you're someone like me who needs to find needles in log haystacks, then bash and grep are surely your friends. Here are two tricks I've come to use to make my work easier.

Making errors scream!

Okay, so you have four or five terminal tabs going, tailing different but related logs, and the logs keep on piling up. A script like the one below should be useful (given that you have headphones on; otherwise you'll be getting weird looks from the people around you).

The script tails a log periodically and checks for the existence of a word (or phrase). If a match is found, it plays an audio file (WAV) to get your attention.
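A minimal sketch of such a script (using aplay for playback and a five-second poll interval, both of which are assumptions you can swap out):

#!/bin/bash
# logchecker.sh - tail the last N lines of a log periodically and play
# a WAV file whenever the given word (or phrase) shows up.

LINES=$1   # <NO_OF_LINE_TO_TAIL>
LOG=$2     # <LOG_FILE>
WORD=$3    # <WORD_TO_LOOK_FOR>
WAV=$4     # <WAV_FILE>

while true; do
    # grep -q exits with 0 on a match and prints nothing
    if tail -n "$LINES" "$LOG" | grep -q "$WORD"; then
        aplay "$WAV"
    fi
    sleep 5
done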



Usage,

sh logchecker.sh <NO_OF_LINE_TO_TAIL> <LOG_FILE> <WORD_TO_LOOK_FOR> <WAV_FILE> 

Redirect errors to a file

You have a server with the debug log level enabled at the root. The logs are putting on MBs instead of KBs. Grep out what you need into a separate file to make analysis more manageable.
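A minimal sketch of such a script (the .filtered output file name is just a convention of this sketch):

#!/bin/bash
# grepout.sh - grep a word out of a log with surrounding context lines
# and write the matches to a separate file for easier analysis.

LOG=$1     # <LOG_FILE>
BEFORE=$2  # <NUM_LINES_BEFORE_MATCH>
AFTER=$3   # <NUM_LINE_AFTER_MATCH>
WORD=$4    # <WORD_TO_MATCH>

# -B prints leading context lines, -A prints trailing context lines
grep -B "$BEFORE" -A "$AFTER" "$WORD" "$LOG" > "$LOG.filtered"
echo "matches written to $LOG.filtered"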



Usage,

sh grepout.sh <LOG_FILE> <NUM_LINES_BEFORE_MATCH> <NUM_LINE_AFTER_MATCH> <WORD_TO_MATCH> 

Thursday, May 26, 2016

How to hide the secure vault decryption password


what's covered: embedding the secure vault password in the Linux startup script.

WSO2 servers ship with the capability of encrypting and securing plain text passwords used in configuration files; find more information about this feature here[1]. As the final step of this process, the decryption password needs to be provided at server startup, either by typing it in or by placing it in a temporary text file. When the servers need to be started as background services, the password can be embedded into the server startup script as shown in this post to make the process more secure.

1) Encode the password to base64


Run the command below to encode it:

echo 'put your password here' | base64

2) Modify the wso2server.sh script to generate the password file at runtime


Include the following line at the start of the elif [ "$CMD" = "start" ]; then block (refer [2]).

echo <the encoded base64 string goes here> | base64 -d | tee $CARBON_HOME/password-tmp


The password-tmp file will get generated before the Carbon bootstrapper is run, and the file will get deleted after it is read.
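For context, the relevant section of the startup script would end up looking something like this (paraphrased; the surrounding lines differ between product versions):

elif [ "$CMD" = "start" ]; then
  # decode the embedded password and write it to the temp file that
  # the secure vault reads (and deletes) at startup
  echo <the encoded base64 string goes here> | base64 -d | tee $CARBON_HOME/password-tmp
  # ... rest of the original start logic ...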


Note that this would only work on Linux distros where the base64 program is available.



Tuesday, March 15, 2016

How to front a bearer token secured endpoint using WSO2 API Manager


what's covered: fronting a bearer token secured endpoint using a mediation policy for APIM 1.10.0


If a requirement arises to front a bearer token secured API while maintaining the API Manager authentication mechanism (though this is unlikely and should probably be avoided), it can be met using a mediation policy.


1) Create a mediation policy with the logic


The mediation policy should be such that it takes in the bearer token (of the back-end service) passed in as a custom transport level header value and passes it on to the backend service with the correct formatting. This can be achieved using a property mediator[1], a header mediator[2] and a few Synapse built-in functions.

<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="bearersequence">
   <property xmlns:ns="http://org.apache.synapse/xsd" name="btoken" expression="$trp:token" scope="default" type="STRING"></property>
   <header xmlns:ns="http://org.apache.synapse/xsd" name="Authorization" scope="transport" expression="fn:concat('Bearer ', get-property('btoken'))"></header>
   <header name="token" scope="transport" action="remove"></header>
</sequence>

Download the example mediation policy from here[3].

2) Attach the mediation policy to the API in flow


Start creating an API with the required HTTP methods etc., select Manage API at the implementation step, and upload the mediation policy to the in flow from the Message Mediation Policies section. Publish the API.




3) Invoke


Invoke the API with the bearer token of the backend service set in a header named "token" (as this is the header name we have configured in the mediation policy).
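For example, an invocation could look like the following (the host, port and API context are placeholders; 8243 is the default APIM gateway HTTPS port):

curl -X GET 'https://localhost:8243/myapi/1.0.0/resource' -H 'Authorization: Bearer <APIM access token>' -H 'token: <backend bearer token>' -k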

 


Wednesday, February 17, 2016

How to create OData Services with WSO2 DSS

what's covered: creating and consuming an OData-conforming service with WSO2 DSS 3.5.0.

Liberating the data layer with services is an important step in converting a monolithic system to one that conforms to Service Oriented Architecture (SOA). WSO2 Data Services Server[1] is a solution targeted at this very requirement. In the past, exposing a data-source (such as an RDBMS) as a service involved configuring the data source and mapping service operations (SOAP) or resources to predefined SQL queries, which was time consuming. From DSS version 3.5.0 onwards you can use OData to make data access more flexible and reduce the need for specialized SQL queries and operations.

OData (the Open Data Protocol) specifies how data can be exposed conforming to RESTful architecture and how it can be queried using HTTP calls.


The anatomy of an OData service URL is illustrated in the OData spec[2].

 

Creating The Service



1) Create a Carbon Data-source


 
Navigate to Configure > Datasources > New Datasource and fill in the RDBMS details appropriately to create a data-source[3]. 

 

2) Generate a service and enable Odata


Generate a service[4] using the data-source created in step 1. Edit the generated service, enable OData from the datasource configuration page and publish the service. The entity model will be generated and the OData service will be available at:


https://<host>:<https port>/odata/<service name>/<datasource config id>
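As a quick sanity check, you can fetch the generated entity data model, which OData exposes under $metadata (the service name below is the one used in this example, and the data-source config id is default):

curl -X GET -H 'Accept: application/xml' 'https://localhost:9443/odata/RestaurantOfTheMindService/default/$metadata' -k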




Find the SQL scripts and data service file used in this example here[5].

Consuming the service


The service conforms to the OData protocol (version 4). You can access entities[6] and use query options[7] to filter and query the data as needed, according to the protocol spec.


Example invocations using cURL[8]



Select all data items belonging to the BOOK resource:


curl -X GET -H 'Accept: application/json' 'https://localhost:9443/odata/RestaurantOfTheMindService/default/BOOK' -k

Select a single data item belonging to the BOOK resource by PK:


curl -X GET -H 'Accept: application/json' 'https://localhost:9443/odata/RestaurantOfTheMindService/default/BOOK(1)' -k

Remove a data item belonging to the BOOK resource by PK:


curl -X DELETE -H 'Accept: application/json' 'https://localhost:9443/odata/RestaurantOfTheMindService/default/BOOK(1)' -k


Select all data items belonging to the BOOK resource where TITLE equals Cook:


curl -X GET -H 'Accept: application/json' 'https://localhost:9443/odata/RestaurantOfTheMindService/default/BOOK?$filter=TITLE%20eq%20%27Cook%27' -k


Select all data items belonging to the BOOK resource where TITLE contains Coo:


curl -X GET -H 'Accept: application/json' 'https://localhost:9443/odata/RestaurantOfTheMindService/default/BOOK?$filter=contains(TITLE%2C%27Coo%27)' -k

Select all data items belonging to the BOOK resource where TITLE contains Coo, with the result limited to 9 items and ordered by TITLE:

curl -X GET -H 'Accept: application/json' 'https://localhost:9443/odata/RestaurantOfTheMindService/default/BOOK?$filter=contains(TITLE%2C%27Coo%27)&$orderby=TITLE&$top=9' -k


Note that the HTTP requests must be appropriately URL-encoded.



Tuesday, January 12, 2016

How to extend prototyping capabilities of WSO2 API Manager

what's covered: creating API prototypes using mediation policies for APIM 1.10.0

WSO2 API Manager comes with API prototyping capabilities out of the box. However, if you need more advanced prototyping capabilities or feel restricted by the available implementation for the time being, you can tap into the underlying mediation engine (WSO2 ESB) to meet your prototyping needs.


1) Create a mediation policy with the prototype logic


The mediation policy should be such that it responds back to the client with a configured response rather than passing the message on to the backend. We can achieve this using the Respond[1] and PayloadFactory[2] mediators.


<sequence xmlns="http://ws.apache.org/ns/synapse" name="prototypesequence">
   <header name="To" action="remove"></header>
   <header name="CustomHeader" scope="transport" value="test123"></header>
   <property name="RESPONSE" value="true"></property>
   <property name="NO_ENTITY_BODY" action="remove" scope="axis2"></property>
   <payloadFactory media-type="json">
      <format>{"id":"101","name": "dumiduh","desc": "hard coded json"}</format>
   </payloadFactory>
   <class name="org.wso2.carbon.apimgt.usage.publisher.APIMgtResponseHandler"/>
   <respond></respond>
</sequence>


The example above uses a hard-coded response body and headers; you can also populate the response with variables as required[3]. Find the example mediation policy here[4].

 

2) Attach mediation policy to API in flow


Start creating an API with the required HTTP methods etc., select Manage API at the implementation step, and upload the prototype mediation policy from the Message Mediation Policies section. Publish the API.



3) Invoke

Invoke the API as you would any other managed API.
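For instance, an invocation could look like the following (the host, port and API context are placeholders; 8243 is the default APIM gateway HTTPS port) and would return the hard-coded JSON body configured in the mediation policy above:

curl -X GET 'https://localhost:8243/prototypeapi/1.0.0/resource' -H 'Authorization: Bearer <APIM access token>' -k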


[1] - https://docs.wso2.com/display/ESB490/Respond+Mediator
[2] - https://docs.wso2.com/display/ESB490/PayloadFactory+Mediator
[3] - https://docs.wso2.com/display/ESB490/PayloadFactory+Mediator#PayloadFactoryMediator-Example3:Addingarguments
[4] - https://drive.google.com/file/d/0B9oVIeyHJKBXb0xkTGUwSmlJc0E/view?usp=sharing

Saturday, January 2, 2016

Running a Jmeter script through Java

It may be useful to run a JMeter script through Java and take actions depending on the assertions in the script.

1) Creating the project


Generate a Java project using Maven and add the following dependencies:
  • ApacheJMeter_core
  • ApacheJMeter_http
  • jorphan
Further dependencies[1] may need to be added if the JMeter script being run uses samplers other than the HTTP sampler, or other components such as pre/post processors.
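A sketch of what the pom.xml dependency section could look like; the version here is an assumption, so match it to the JMeter release your script was built with:

<dependencies>
   <!-- core engine classes such as StandardJMeterEngine and SaveService -->
   <dependency>
      <groupId>org.apache.jmeter</groupId>
      <artifactId>ApacheJMeter_core</artifactId>
      <version>2.13</version>
   </dependency>
   <!-- HTTP sampler support -->
   <dependency>
      <groupId>org.apache.jmeter</groupId>
      <artifactId>ApacheJMeter_http</artifactId>
      <version>2.13</version>
   </dependency>
   <!-- utility classes such as HashTree -->
   <dependency>
      <groupId>org.apache.jmeter</groupId>
      <artifactId>jorphan</artifactId>
      <version>2.13</version>
   </dependency>
</dependencies>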

 

2) Code


        ....
        StandardJMeterEngine jmeter = new StandardJMeterEngine();

        // Initialize properties, logging, locale, etc.
        JMeterUtils.loadJMeterProperties(JMETERPROPFILE);
        JMeterUtils.setJMeterHome(JMETERHOME);
        JMeterUtils.initLocale();
        SaveService.loadProperties();

        // Load the existing .jmx test plan
        FileInputStream in = new FileInputStream(JMETERSCRIPT);
        HashTree testPlanTree = SaveService.loadTree(in);
        in.close();

        // Run the JMeter test, collecting results through MyResultCollector,
        // the custom ResultCollector subclass shown below
        Summariser summer = new Summariser();
        String testLog = JTLHOME + new Date().getTime() + ".jtl";
        MyResultCollector resultCollector = new MyResultCollector(summer);
        resultCollector.setFilename(testLog);
        testPlanTree.add(testPlanTree.getArray()[0], resultCollector);
        jmeter.configure(testPlanTree);
        jmeter.run();
        results = resultCollector.getResults();
        ....

The ResultCollector class is extended so that actions can be taken based on the assertion results.

    ....
    @Override
    public void sampleOccurred(SampleEvent e) {
        super.sampleOccurred(e);
        SampleResult r = e.getResult();

        // capture the outcome of each sample in a simple DTO
        ResultDTO result = new ResultDTO();
        result.setSamplerName(r.getSampleLabel());
        result.setResponseCode(r.getResponseCode());
        result.setResult(r.isSuccessful());
        results.add(result);
    }
    ....



Find the full example here,
https://github.com/handakumbura/jmeterrunner
