JavaPythonTech blog contains various tools and skills for a developer. Java/Python Best Practices, Tools, Examples, Programming interview questions.

Convert JSON to an Avro record object in Java

Introduction:


Avro is a popular data serialization system that offers efficient data exchange and storage capabilities. When working with Avro, you may encounter scenarios where you need to convert JSON data into Avro record objects. In this blog post, we will explore the process of converting JSON to Avro record objects, enabling you to seamlessly integrate JSON data into your Avro-based applications.

Step 1: Define the Avro Schema 
The first step is to define the Avro schema that corresponds to the structure of your JSON data. The Avro schema specifies the fields, data types, and hierarchy of the data. Ensure that the Avro schema accurately represents the JSON structure.
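For illustration, a minimal schema for a hypothetical "Example" record (the record name, namespace, and fields here are assumptions, not from a real project) might look like this in an example.avsc file:

```json
{
  "type": "record",
  "name": "Example",
  "namespace": "com.demo.avro",
  "fields": [
    {"name": "id", "type": "int"},
    {"name": "name", "type": "string"}
  ]
}
```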

Step 2: Generate Code from the Avro Schema 
To convert JSON to Avro record objects, you'll need to generate code based on the Avro schema. There are various code generation tools available, depending on your programming language and development environment. For example, you can use the Avro Maven Plugin or the Avro Gradle Plugin if you are working with Java.

Invoke the code generation tool with the Avro schema as input, and it will generate classes or structures representing the Avro record objects based on the schema.

Step 3: Parse and Convert the JSON Data 
Next, you need to parse the JSON data and convert it into Avro record objects using the generated code. The specific code required for this step will depend on the programming language you are using.

In Java, for example, you can use the generated Avro record classes together with Avro's built-in JSON decoder to parse and convert JSON data. Here's an example of how to accomplish this:

import org.apache.avro.Schema;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.specific.SpecificDatumReader;

import java.io.IOException;

public class JsonToAvroExample {

    public static void main(String[] args) throws IOException {
        // "Example" is a SpecificRecord class generated from the Avro schema in Step 2
        Example event = deserializeEvent(Example.class, "{}", Example.getClassSchema());
    }

    private static <T> T deserializeEvent(Class<T> clazz, String json, Schema schema)
            throws IOException {
        // SpecificDatumReader deserializes into the generated record class
        DatumReader<T> reader = new SpecificDatumReader<>(clazz);
        // jsonDecoder parses the JSON text against the schema
        Decoder decoder = DecoderFactory.get().jsonDecoder(schema, json);
        return reader.read(null, decoder);
    }
}

Step 4: Testing the JSON to Avro Conversion 
To ensure that the conversion is working correctly, you should test the JSON to Avro conversion functionality. Prepare sample JSON data that matches the structure defined in the Avro schema. Use the deserializeEvent method, passing in the JSON data and the corresponding Avro schema. Validate that the resulting Avro record object contains the expected data.
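For a quick sanity check without generated classes, the same decoding can be done with Avro's GenericDatumReader against a schema parsed at runtime. The schema and sample JSON below are made up for illustration:

```java
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;

import java.io.IOException;

public class GenericJsonToAvro {

    public static void main(String[] args) throws IOException {
        // Hypothetical schema, inlined here for illustration only
        String schemaJson = "{\"type\":\"record\",\"name\":\"Example\",\"fields\":["
                + "{\"name\":\"id\",\"type\":\"int\"},"
                + "{\"name\":\"name\",\"type\":\"string\"}]}";
        Schema schema = new Schema.Parser().parse(schemaJson);

        String json = "{\"id\": 1, \"name\": \"John\"}";
        GenericDatumReader<GenericRecord> reader = new GenericDatumReader<>(schema);
        Decoder decoder = DecoderFactory.get().jsonDecoder(schema, json);
        GenericRecord record = reader.read(null, decoder);

        System.out.println(record.get("id"));   // prints 1
        System.out.println(record.get("name")); // prints John (a Utf8 CharSequence)
    }
}
```

The generic approach is handy in tests because it needs no code generation step; the trade-off is that field access is by name and untyped.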

Conclusion: 

Converting JSON to Avro record objects allows you to seamlessly incorporate JSON data into your Avro-based applications. By defining the Avro schema, generating code, and utilizing appropriate parsing and conversion techniques, you can efficiently transform JSON data into Avro record objects. Avro's flexibility and efficiency make it a powerful choice for data serialization. Integrate the steps outlined in this guide into your development workflow, and you'll be able to handle JSON to Avro conversions with ease.

How to make files generated from an Avro schema have String fields instead of CharSequence fields



Introduction:

Avro is a widely used data serialization system that allows for efficient data exchange between systems and programming languages. By default, the Java classes generated from an Avro schema use CharSequence for string fields. In this blog post, we will explore how to make the generated files use java.lang.String instead, enabling you to handle string data more conveniently.

Add the below plugin in the project's pom.xml : 

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.avro</groupId>
            <artifactId>avro-maven-plugin</artifactId>
            <version>${avro.version}</version>
            <executions>
                <execution>
                    <id>schemas</id>
                    <phase>generate-sources</phase>
                    <goals>
                        <goal>schema</goal>
                    </goals>
                    <configuration>
                        <stringType>String</stringType>
                        <imports>
                            <import>${project.basedir}/src/main/avro/example.avsc</import>
                        </imports>
                        <sourceDirectory>src/main/avro</sourceDirectory>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Notice the <stringType> configuration element.

The above configuration will make the generated Java files have String fields instead of CharSequence fields.
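If you prefer to control this per field rather than globally, Avro also honors an avro.java.string hint directly in the schema (shown here on a hypothetical field, not taken from a real project):

```json
{"name": "firstName", "type": {"type": "string", "avro.java.string": "String"}}
```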

Hope this helps! Happy coding!

How to switch branches in IntelliJ IDEA


Introduction:

IntelliJ IDEA is a powerful integrated development environment (IDE) widely used by developers to write code and manage version control systems. One of the fundamental tasks in software development is switching branches, allowing developers to work on different features or bug fixes concurrently. In this blog post, we will walk you through the process of switching branches in IntelliJ IDEA, enabling you to effortlessly navigate between different branches of your project.

Step 1: Opening the Version Control Tool Window 
To switch branches, first, make sure you have your project open in IntelliJ IDEA. Then, navigate to the bottom of the IDE window, where you will find a toolbar with several buttons. Click on the "Version Control" button to open the Version Control tool window.

Step 2: Selecting the Git Branches Dropdown 
Once the Version Control tool window is open, you will see a section titled "Local Changes." In this section, there is a dropdown labeled "Git." Click on the dropdown arrow to reveal a list of options.

Step 3: Choosing the Branch to Switch 
In the Git dropdown, you will find a list of branches available in your project. Select the branch you want to switch to by clicking on it. IntelliJ IDEA will automatically update the project to reflect the selected branch.

Step 4: Pulling the Latest Changes (Optional) 
If you are switching to a branch that has remote changes, it's recommended to pull the latest changes to ensure you have the most up-to-date code. To do this, right-click on the branch name in the Git dropdown and select "Git Pull" from the context menu. IntelliJ IDEA will fetch and merge the latest changes into your local branch.

Step 5: Verifying the Branch Switch 
After selecting the branch and pulling the latest changes (if necessary), you can verify that the branch switch was successful. Open your project's file tree, and you should see the files and folders associated with the newly selected branch. Additionally, any modifications you make to the code will be saved in the context of the current branch.
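If you prefer IntelliJ IDEA's built-in terminal, the same switch can be done with plain Git commands. The branch name below is made up, and the snippet creates a throwaway repository so it runs anywhere:

```shell
# Create a throwaway repo so the commands below work anywhere
cd "$(mktemp -d)"
git init -q .
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"

# Create and switch to a branch (the IDE does the equivalent of this)
git branch feature/login
git checkout feature/login

# Verify which branch is now checked out
git rev-parse --abbrev-ref HEAD   # prints feature/login
```

On Git 2.23+, `git switch feature/login` is the more explicit equivalent of `git checkout feature/login`.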

Conclusion: 

Switching branches in IntelliJ IDEA is a straightforward process that allows developers to seamlessly transition between different features or bug fixes. By following the steps outlined in this blog post, you can efficiently navigate your project and work on multiple branches simultaneously. IntelliJ IDEA's robust version control integration simplifies the branch switching process, ensuring a smooth and productive development experience.

Spring Boot Liquibase: How to exclude SQL statement execution in a particular environment/Spring profile

Introduction:

When you are using Liquibase with Spring Boot for database migration, you might often end up in a situation where you want to execute SQL statements only in a particular environment, for reasons like local/dev data setup, integration tests, etc. You would not want these test data setup statements executed in environments like QA or production. Let's see how we can make a particular SQL statement (a changeset, in Liquibase terms) execute only in a particular environment.

Scenario :

Consider that we have an Employee table in our db.tables.changelog.sql file, as shown below.

--liquibase formatted sql

-- changeset demo:20230605-01
-- preconditions onFail:MARK_RAN onError:MARK_RAN
-- precondition-sql-check expectedResult:0 select count(*) from information_schema.tables where lower(table_name) = 'employee';
CREATE TABLE Employee
(
    id integer,
    firstName varchar,
    lastName varchar,
    email varchar,
    addressLine varchar,
    city varchar
);
-- rollback drop table employee;


And, we have the below data setup file demo.local.datainserts.changelog.sql 

--liquibase formatted sql

-- changeset demo:20230605-01 context:local
-- preconditions onFail:MARK_RAN onError:MARK_RAN
-- precondition-sql-check expectedResult:0 select count(*) from employee;
insert into employee (id, firstName, lastName, email, addressLine, city) values (1, 'John', 'Wick', 'johnwick@testmail.com', 'addressLine', 'NewYork');

Notice that we have specified the changeset to be executed only when the context is "local".
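Contexts are not limited to a single value; Liquibase also accepts context expressions (with "and", "or", and "!"), so a changeset can target several environments at once. A sketch with a hypothetical changeset id:

```sql
-- changeset demo:20230605-02 context:"local or test"
insert into employee (id, firstName, lastName, email, addressLine, city) values (2, 'Jane', 'Doe', 'janedoe@testmail.com', 'addressLine', 'London');
```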

Configuration: To inform Spring about the context to use for a specific profile, navigate to your application-local.yaml (assuming the profile is "local") and add the following configuration:


spring:
  liquibase:
    contexts: local

Alternatively, you can modify the application.yaml file to pass the spring.profiles.active value to the Liquibase context:

spring:
  liquibase:
    contexts: ${spring.profiles.active}

That's it! The SQL statement or changeset will be executed only when the context is "local" and will be skipped in any other context or environment.

Conclusion:

By leveraging the power of Liquibase contexts and Spring Boot configuration, you can control the execution of SQL statements or changesets in specific environments. This flexibility allows you to manage test data setup or environment-specific scripts effectively while ensuring they are not executed in production environments. Liquibase simplifies the database migration process, and with the context feature, you can further enhance its functionality.

Happy coding!

More information about Liquibase contexts can be found in the official Liquibase documentation.

Generate Oracle insert statements for Excel data using Java

Many times, it is easier to generate insert statements with a program than to create them manually, which is time-consuming, especially if you have to create hundreds or thousands of them.

There are many ways to achieve this, the easiest being to use formulas in Excel to generate the insert statements.

Using Excel becomes more complex if the data is laid out in columns rather than rows. It is much easier if the data is in a single row, i.e. table columns can be mapped directly to Excel columns.

The program below can be used if the data in Excel is laid out as shown below:



Example: the insert statements should have Row1-Cell1 (i.e. 123456) common across its three insert statements, and each should carry one of the column values:
123456,ABC
123456,DEF
123456,GHI
456789,ABC
456789,DEF
456789,GHI
456789,JKL
456789,MNO
.......
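Before wiring in Apache POI, the core transformation can be sketched in plain Java: group the values by their key, then emit one insert per value. The table and column names below are placeholders, not from a real schema:

```java
import java.util.*;

public class InsertStatementSketch {

    // Turn {123456 -> [ABC, DEF, GHI]} into one INSERT statement per value
    static List<String> toInserts(Map<Integer, List<String>> grouped) {
        List<String> inserts = new ArrayList<>();
        for (Map.Entry<Integer, List<String>> entry : grouped.entrySet()) {
            for (String value : entry.getValue()) {
                inserts.add("INSERT INTO MY_TABLE (ID, CODE) VALUES ("
                        + entry.getKey() + ", '" + value + "');");
            }
        }
        return inserts;
    }

    public static void main(String[] args) {
        Map<Integer, List<String>> grouped = new LinkedHashMap<>();
        grouped.put(123456, Arrays.asList("ABC", "DEF", "GHI"));
        grouped.put(456789, Arrays.asList("ABC", "DEF"));
        toInserts(grouped).forEach(System.out::println);
    }
}
```

The full program below does the same thing, with the map built by reading the spreadsheet column by column.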

Program:

import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

import java.io.*;
import java.util.*;

public class ReadExcelAndGenerateInserts {

    public static void main(String[] args) {
        try {
            Map<Integer, List<String>> map = readExcelFile();
            List<String> insertStatements = createInsertStatements(map);
            createTxtFile(insertStatements);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private static void createTxtFile(List<String> insertStatements) throws IOException {
        FileWriter fileWriter = new FileWriter("inserts.txt");
        for (String statement : insertStatements) {
            fileWriter.append(statement);
            fileWriter.append("\n");
        }
        fileWriter.flush();
        fileWriter.close();
        System.out.println("File created!!");
    }

    private static List<String> createInsertStatements(Map<Integer, List<String>> map) {
        List<String> inserts = new ArrayList<>();
        for (Map.Entry<Integer, List<String>> entry : map.entrySet()) {
            int cell1 = entry.getKey();
            for (String value : entry.getValue()) {
                StringBuilder sb = new StringBuilder();
                sb.append("INSERT INTO TABLE (COLUMN1,COLUMN2,COLUMN3,COLUMN4,COLUMN5) VALUES (SEQ.Nextval ,");
                sb.append(cell1).append(",'123','").append(value).append("', NULL);");
                inserts.add(sb.toString());
            }
        }
        System.out.println(inserts);
        return inserts;
    }

    private static Map<Integer, List<String>> readExcelFile() throws IOException {
        Map<Integer, List<String>> map = new HashMap<>();
        FileInputStream fis = new FileInputStream(new File("Input.xlsx"));
        Workbook wb = new XSSFWorkbook(fis);
        Sheet sheet = wb.getSheetAt(0);
        int noOfCols = sheet.getRow(0).getLastCellNum();
        System.out.println("No. of columns is : " + noOfCols);
        int key = 0;
        // Walk column by column: row 0 of each column holds the numeric key,
        // and the rows below it hold the string values for that key
        for (int i = 0; i < noOfCols; i++) {
            List<String> list = new ArrayList<>();
            // getLastRowNum() returns the index of the last row, so use <=
            for (int rowNumber = 0; rowNumber <= sheet.getLastRowNum(); rowNumber++) {
                Row row = sheet.getRow(rowNumber);
                if (row == null) {
                    continue;
                }
                Cell cell = row.getCell(i);
                if (null != cell) {
                    if (cell.getCellType() == Cell.CELL_TYPE_NUMERIC) { // POI 3.x cell type constants
                        if (rowNumber == 0) {
                            key = (int) cell.getNumericCellValue();
                        }
                        System.out.println(cell.getNumericCellValue());
                    } else if (cell.getCellType() == Cell.CELL_TYPE_STRING) {
                        list.add(cell.getStringCellValue());
                        System.out.println(cell.getStringCellValue());
                    }
                }
            }
            map.put(key, list);
        }
        wb.close();
        System.out.println(map);
        return map;
    }
}


Announcements

I will be posting twice a week on the latest Java libraries/frameworks a developer needs to know in 2019.
I will also add common errors and their resolutions.

Please feel free to comment if you need anything specific.
