Interactions with hrorm have two major parts: building Dao objects and then using them. Dao building is accomplished with the aptly named DaoBuilder class. DaoBuilder objects are part of the one-time (static, singleton) initialization of your application. There is little point in having more than one builder for any entity type. Of course, some care must be taken: by their nature, DaoBuilder objects are mutable, so if you directly expose them to the rest of your application during start up, it's possible to do something stupid.

Dao objects themselves are what perform the actual tasks of persisting and instantiating entity objects. To make a Dao requires a Connection object. Since a Dao keeps a stateful Connection to the underlying data store, it is dangerous to share instances across threads. Generally, the idea is to instantiate a Dao when you need it, and then allow it to be garbage collected. It is up to the application itself to deal with reaping the Connection, with some exceptions noted below.

Also take a look at the Quick Start and Javadocs.
Before diving into the nuts-and-bolts of how to use hrorm, it is helpful to understand the ideas it implements.
Using hrorm means accepting some restrictions on how your entities (both the Java classes and the SQL schema) are designed. Some of these restrictions are good practices regardless of how database and object models are built.
Each entity must have a Long identifier field that will be used as a primary key. Hrorm expects that this is a nullable field and that null means unpersisted. Hrorm will populate this field from a sequence (see restrictions on schema below), unless you use an immutable model. The primary key will be used when hrorm performs updates. (There is limited support for keyless entities.)

For collections of child entities, hrorm supports List types exclusively. No sets or arrays or other collections.
At the moment, hrorm supports a limited number of (Java) datatypes: Long, BigDecimal, String, Boolean, and Instant. Additionally, hrorm supports a mechanism for persisting values that can be converted to and from String, which is intended primarily for enumerated types. Adding support for new datatypes that are supported by the ResultSet interface should not be particularly difficult if you're a hacker.
One point of using a relational database as opposed to a document store or other mechanism is to preserve the structure of relations between entities. Hrorm supports two kinds of relationships: a parent-child relation where one object contains a list of children, and a sibling relationship, where one object expresses a connection with another entity.
These relations are defined by using the DaoBuilder.withParentColumn() and DaoBuilder.withChildren() methods.
In a parent-child relationship, the child is assumed to be completely dependent on the parent, so that its very existence depends on the existence of the parent.
Take a look at the recipe example. If a recipe is deleted, it makes no sense to preserve the ingredient rows. So, if a call is made to the recipe Dao.delete() method, all the ingredients will be deleted too. Likewise, on an update hrorm will make the necessary inserts, updates, and deletes to the ingredients table to synchronize the object state. In fact, you would rarely want to instantiate an ingredients Dao directly. Hrorm will do the work for you.
One tricky thing about these relationships is the reversal in how ownership is expressed between the database schema and the object model. In the object model, the Hand object has Finger objects. In the database, the FINGER table has foreign key references to the HAND table. With hrorm, the DaoBuilder objects of both the parent and the child need to understand the relationship, not just one or the other.
These relations are defined by using the DaoBuilder.withJoinColumn() method.
These are relations between two objects where one object refers to the other in a dependent, but not controlling, way.
In the recipe example, the relationship between an Author and a Recipe is of this type. A Recipe requires an Author, but neither owns the other.
Hrorm requires that sibling objects be persisted first, and will not handle transitive persistence automatically. Likewise, deleting a Recipe will not cause a cascading delete of an Author record.
Note well: Hrorm will do nothing to prevent a dependent sibling from being deleted. The application code or database schema constraints (or both!) must be in place to prevent orphaned records of that type.
Hrorm supports many-to-many relations through AssociationDao objects. An AssociationDao is an object that allows the caller to create or delete links between entities. Then the associates of any individual entity can be selected, in either direction.
As an example, think of movies and actors. Actors appear in multiple movies, and movies include multiple actors. Using hrorm we would create independent classes modeling both actors and movies, and then create an AssociationDao allowing us to load either all the actors in any movie, or all the movies that an actor appeared in.
AssociationDao objects do not implement the full Dao interface. In fact, the interface is completely different. They have one purpose only: managing the links between entities. They must be backed by a table of three columns: a primary key, and foreign keys to the two entities being linked.
For the most part, hrorm tries to stay out of the transaction handling business. Applications know what changes must be transactional; hrorm does not. However, hrorm's commands to insert, update, and delete records can lead to multiple SQL statements being run, due to the handling of parent-child relationships. This means that in the case of an error, the database is at risk of being left in an illegal state.
If nested transactions were supported natively by every database provider, it would probably be correct to wrap database mutations in an internally nested transaction and commit or rollback on completion. This is not possible for every provider, and mechanisms for how to accomplish this are not identical even for databases that do provide for transaction nesting.
Hrorm does give a minimal amount of support to attempt to alleviate these issues. Hrorm provides a Transactor class with a couple of methods to eliminate the boilerplate try ... catch ... finally blocks necessary for doing transactions. In keeping with the hrorm ethos, these methods do not declare any checked exceptions. Keep in mind that a Transactor will automatically close its connection whether it completes with a commit or a rollback.
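For context, here is a sketch of the kind of boilerplate the Transactor is meant to absorb, using the Person entity built later in this guide. The JDBC portion is standard; ConnectionPool stands in for whatever supplies connections in your application, and the Transactor constructor and runAndCommit method shown at the end are assumptions about its API, so check the Javadocs for the exact signatures.

// Manual transaction handling: the pattern the Transactor wraps for you.
static void insertPerson(DaoBuilder<Person> daoBuilder, Person person) throws SQLException {
    Connection connection = ConnectionPool.connect();
    try {
        Dao<Person> dao = daoBuilder.buildDao(connection);
        dao.insert(person);
        connection.commit();
    } catch (HrormException ex) {
        connection.rollback();
        throw ex;
    } finally {
        connection.close();
    }
}

// A sketch of the same work using a Transactor. The constructor argument and
// the runAndCommit method name are assumptions, not confirmed API.
Transactor transactor = new Transactor(ConnectionPool.connect());
transactor.runAndCommit(con -> daoBuilder.buildDao(con).insert(person));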
In addition, the Dao interface provides cognate methods for insert, update, and delete named atomicInsert, atomicUpdate, and atomicDelete. These methods provide a no-fuss way to do mutations of parent-child relations. However, these methods must be used with care! In addition to a possibly unexpected early commit if these methods are accidentally used in a larger transaction, remember that, as above, these methods will also close the connection they use when complete.
There is no one-size-fits-all solution for marrying the problem of database-to-object mapping with the problems of transactions and atomicity. For this reason, hrorm mostly just tries to stay out of the way.
Hrorm provides two kinds of builders: one for Javabean style entities (DaoBuilder) and one for immutable object models with separate builder objects (IndirectDaoBuilder). Both support very similar methods, and though most of the examples below show a plain DaoBuilder, building for an immutable model is almost identical.
The easiest case for any ORM tool is persisting a single object backed by a single table. Let's work on persisting a model for a person that includes the following elements: a name, a weight, a height, a birthday, whether the person is a high school graduate, and a hair color.
To model the person entity, we write a Java Person object.
class Person {
    Long id;
    String name;
    long weight;
    BigDecimal height;
    Instant birthday;
    Boolean isHighSchoolGraduate;
    HairColor hairColor;
}
A few small points: the id field is a nullable Long, as hrorm expects, and HairColor is an enumerated type with values such as HairColor.Black, HairColor.Brown, etc.
In the database we will create two structures for persisting this class: a table to store the data and a sequence to issue the IDs.
CREATE SEQUENCE PERSON_SEQUENCE;

CREATE TABLE PERSON_TABLE (
    ID INTEGER PRIMARY KEY,
    NAME TEXT,
    WEIGHT INTEGER,
    HEIGHT DECIMAL,
    BIRTHDAY TIMESTAMP,
    IS_HIGH_SCHOOL_GRADUATE BOOLEAN,
    HAIR_COLOR TEXT
);
Note the somewhat different types than in the Java code.
To translate between the database representation and the Java representation, we plan to use a Dao object. We could build that directly, but hrorm provides a DaoBuilder class that makes things much easier. In hrorm, both Dao objects and their builders are parameterized on the type of thing they persist. We start off by simply calling the DaoBuilder constructor.
DaoBuilder<Person> daoBuilder = new DaoBuilder<>("PERSON_TABLE", Person::new);
The constructor takes two arguments: the name of the table and a no-argument method for creating a new instance of the parameterized type.
Next, we need to define the primary key for this entity.
daoBuilder.withPrimaryKey("ID","PERSON_SEQUENCE", Person::getId, Person::setId);
The primary key is defined with four elements: the name of the column ("ID"), the name of the sequence that populates it ("PERSON_SEQUENCE"), a getter that reads the ID (a Long) from the Person object, and a setter that writes the ID back onto the Person object.
With that covered, we can begin to teach the DaoBuilder about the individual data elements. First, we will teach it about the name field.
daoBuilder.withStringColumn("NAME", Person::getName, Person::setName);
This explains that the table has a column named "NAME", that the value in that column can be populated by calling getName() on a Person, and that the value can be set by calling setName(). There are other methods on the DaoBuilder for other Java types.
For the integer weight value (which should actually be a Long or long, not an int or short):
daoBuilder.withIntegerColumn("WEIGHT", Person::getWeight, Person::setWeight);
For fractional, decimal, or floating point values, hrorm supports the java.math.BigDecimal type.
daoBuilder.withBigDecimalColumn("HEIGHT", Person::getHeight, Person::setHeight);
For dates and times, hrorm supports the java.time.Instant type.
daoBuilder.withInstantColumn("BIRTHDAY", Person::getBirthday, Person::setBirthday);
And similarly for boolean values. Not all databases support a Boolean type. For those, you should use the Converter apparatus, as with an enumerated type.
daoBuilder.withBooleanColumn("IS_HIGH_SCHOOL_GRADUATE", Person::isHighSchoolGraduate, Person::setHighSchoolGraduate);
For the enumerated HairColor type, hrorm needs a bit more help, via an implementation of its Converter interface. We need a simple class that looks like this:
class HairColorConverter implements Converter<HairColor, String> {
    @Override
    public String from(HairColor item) {
        return item.getColorName();
    }

    @Override
    public HairColor to(String s) {
        return HairColor.forColorName(s);
    }
}
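For reference, the HairColor enumeration assumed by this converter could look like the sketch below. Only the Black and Brown constants are mentioned in this guide; the other constants and the color-name strings are illustrative assumptions.

// A sketch of an enumerated type compatible with the HairColorConverter above.
enum HairColor {
    Black("Black"), Brown("Brown"), Blonde("Blonde"), Red("Red");

    private final String colorName;

    HairColor(String colorName) {
        this.colorName = colorName;
    }

    public String getColorName() {
        return colorName;
    }

    public static HairColor forColorName(String colorName) {
        for (HairColor hairColor : values()) {
            if (hairColor.colorName.equals(colorName)) {
                return hairColor;
            }
        }
        throw new IllegalArgumentException("Unknown hair color: " + colorName);
    }
}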
Once the Converter exists, we can teach the DaoBuilder about it and the hair color field.
daoBuilder.withConvertingStringColumn("HAIR_COLOR", Person::getHairColor, Person::setHairColor, new HairColorConverter());
Notice that in addition to the usual fields for column name, getter, and setter, we additionally must specify the conversion mechanism.
That completes the DaoBuilder. Now we can actually build a Dao<Person> object, assuming we have a java.sql.Connection. But before that, we should note that the DaoBuilder supports a fluent interface, so we could write all of the above as:
DaoBuilder<Person> daoBuilder = new DaoBuilder<>("PERSON_TABLE", Person::new)
        .withPrimaryKey("ID", "PERSON_SEQUENCE", Person::getId, Person::setId)
        .withStringColumn("NAME", Person::getName, Person::setName)
        .withIntegerColumn("WEIGHT", Person::getWeight, Person::setWeight)
        .withBigDecimalColumn("HEIGHT", Person::getHeight, Person::setHeight)
        .withInstantColumn("BIRTHDAY", Person::getBirthday, Person::setBirthday)
        .withBooleanColumn("IS_HIGH_SCHOOL_GRADUATE", Person::isHighSchoolGraduate, Person::setHighSchoolGraduate)
        .withConvertingStringColumn("HAIR_COLOR", Person::getHairColor, Person::setHairColor, new HairColorConverter());
In just 8 lines of code, we have taught hrorm everything it needs to know to CRUD Person objects.
When one entity contains a collection of other entities, hrorm calls that a parent-child relation.
Here is a simple model for tracking inventories of stocks of things through time. At each instant that we measure, we want to know what quantity of each product we have.
public class Inventory {
    Long id;
    Instant date;
    List<Stock> stocks;
}

public class Stock {
    Long id;
    String productName;
    BigDecimal amount;
}
The Inventory class represents a snapshot in time of what was available in inventory, modeled as a List of Stock items, each of which contains a product name and a decimal quantity of how much of that thing is available. Notice that the Stock model contains neither the inventory ID nor a reference to the inventory object itself.
To model this in the database, we make each item in the STOCK table point back to an INVENTORY record, as follows.
CREATE TABLE INVENTORY (
    ID INTEGER PRIMARY KEY,
    DATE TIMESTAMP
);

CREATE TABLE STOCK (
    ID INTEGER PRIMARY KEY,
    INVENTORY_ID INTEGER,
    PRODUCT_NAME TEXT,
    AMOUNT DECIMAL
);

CREATE SEQUENCE INVENTORY_SEQUENCE;
CREATE SEQUENCE STOCK_SEQUENCE;
To model this in hrorm, we need to teach it about the parent-child relationship between the two entities, using the DaoBuilder.withParentColumn() and DaoBuilder.withChildren() methods.
First we make a DaoBuilder for the Stock entity.
DaoBuilder<Stock> stockDaoBuilder = new DaoBuilder<>("STOCK", Stock::new)
        .withPrimaryKey("ID", "STOCK_SEQUENCE", Stock::getId, Stock::setId)
        .withParentColumn("INVENTORY_ID")
        .withStringColumn("PRODUCT_NAME", Stock::getProductName, Stock::setProductName)
        .withBigDecimalColumn("AMOUNT", Stock::getAmount, Stock::setAmount);
The column INVENTORY_ID is marked not as an integer column, but with the special withParentColumn method.
An entity can have only one parent. In the Inventory DaoBuilder we use the withChildren method to complete the relationship definition.
DaoBuilder<Inventory> inventoryDaoBuilder = new DaoBuilder<>("INVENTORY", Inventory::new)
        .withPrimaryKey("ID", "INVENTORY_SEQUENCE", Inventory::getId, Inventory::setId)
        .withInstantColumn("DATE", Inventory::getDate, Inventory::setDate)
        .withChildren(Inventory::getStocks, Inventory::setStocks, stockDaoBuilder);
When we create a Dao in this fashion, we create a category of entity, the child, that is wholly dependent upon another, the parent. Whenever we insert, update, delete, or select the parent entity, the changes we make flow through to the children and transitively to their children.
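A minimal usage sketch of what that means in practice, assuming a Connection is available as in the examples further below: a single insert call on the parent persists its children as well. The product name and amount values are illustrative.

Connection connection = // comes from somewhere
Dao<Inventory> inventoryDao = inventoryDaoBuilder.buildDao(connection);

// An illustrative child record.
Stock flourStock = new Stock();
flourStock.setProductName("Flour");
flourStock.setAmount(new BigDecimal("500.0"));

Inventory inventory = new Inventory();
inventory.setDate(Instant.now());
inventory.setStocks(Arrays.asList(flourStock));

// One call: hrorm inserts the INVENTORY row and the STOCK rows, pulling
// sequence values and setting the INVENTORY_ID foreign key itself.
inventoryDao.insert(inventory);
connection.commit();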
Be careful: if you do not want the children to be deleted, this is not the relationship you want to build. In particular, remember that issuing an update will result not just in a SQL UPDATE in the database, but possibly a whole series of INSERT, UPDATE, and DELETE statements being run.
Hrorm always understands child objects to be members of a List. No other collection type is supported.
If your object model includes a back-reference from the child to the parent, hrorm will populate it for you. If, in the model above, the Stock class had a reference to its parent Inventory, we could use an overloaded withParentColumn() method call on its DaoBuilder, as follows:
.withParentColumn("INVENTORY_ID", Stock::getInventory, Stock::setInventory)
That will cause the reference to the parent object to be automatically set when using any of the Dao select methods.
Hrorm supports many-to-many relations through AssociationDao and AssociationDaoBuilder objects. The object model should be of two independent entities: neither should contain a reference to the other.
As an example, consider the following model of actors and movies.
public class Actor {
    Long id;
    String name;
}

public class Movie {
    Long id;
    String title;
}
It does not make sense to make either entity the parent of the other, or to have a single reference between the two classes in either direction. But we do want to create links between any pair of actors and movies. (For the following, we will assume the existence of database structures and DaoBuilder objects for both entities.)
The backing table for storing associations should look like this:
create table actor_movie_associations (
    id integer primary key,
    movie_id integer,
    actor_id integer
);
And there should be a sequence to populate the association IDs.
To create an AssociationDao, we specify the DaoBuilder (or Dao) objects of the two entities and a few details about the association table. It looks like this:
DaoBuilder<Actor> actorDaoBuilder = // reference or creation
DaoBuilder<Movie> movieDaoBuilder = // reference or creation

AssociationDaoBuilder<Actor, Movie> actorMovieAssociationDaoBuilder =
        new AssociationDaoBuilder<>(actorDaoBuilder, movieDaoBuilder)
                .withTableName("actor_movie_associations")
                .withSequenceName("actor_movie_association_sequence")
                .withPrimaryKeyName("id")
                .withLeftColumnName("actor_id")
                .withRightColumnName("movie_id");
All of the elements shown above must be set to make a valid AssociationDaoBuilder. To create an AssociationDao from the builder, just pass it a Connection. There is also support for immutable models, via the IndirectAssociationDao.
When one entity object contains a reference to another entity object, hrorm calls that a sibling or join relationship.
Consider a model of cities and states, where each city contains a reference to a state.
class State {
    Long id;
    String name;
}

class City {
    Long id;
    String name;
    State state;
}
This could be backed by this schema.
CREATE TABLE STATE (
    ID INTEGER PRIMARY KEY,
    NAME TEXT
);

CREATE TABLE CITY (
    ID INTEGER PRIMARY KEY,
    NAME TEXT,
    STATE_ID INTEGER
);

CREATE SEQUENCE STATE_SEQUENCE;
CREATE SEQUENCE CITY_SEQUENCE;
Creating the State DaoBuilder is trivial.
DaoBuilder<State> stateDaoBuilder = new DaoBuilder<>("STATE", State::new)
        .withPrimaryKey("ID", "STATE_SEQUENCE", State::getId, State::setId)
        .withStringColumn("NAME", State::getName, State::setName);
There is one new trick to creating the City DaoBuilder: using the DaoBuilder.withJoinColumn() method, which will refer to the stateDaoBuilder we just defined.
DaoBuilder<City> cityDaoBuilder = new DaoBuilder<>("CITY", City::new)
        .withPrimaryKey("ID", "CITY_SEQUENCE", City::getId, City::setId)
        .withStringColumn("NAME", City::getName, City::setName)
        .withJoinColumn("STATE_ID", City::getState, City::setState, stateDaoBuilder);
The withJoinColumn method accepts an extra parameter: a DaoDescriptor. Both the DaoBuilder and the Dao class implement this interface. Generally, it's much more convenient to create all the builder objects together.
Sibling or join relationships in hrorm are one-way. One object declares that it has a reference to another. Trying to make a circular relationship will lead to errors.
When hrorm instantiates objects like City from the database, it automatically instantiates the appropriate sibling State objects and sets the field in the City object.
Of course, you could just treat these as two one-table Dao objects, and then write some code to glue things together. In addition to being inconvenient, this will likely have poorer performance, since hrorm will issue a SQL left join to load the City and State objects with one query.
Objects can have several join columns, and those objects can have their own join columns. Hrorm will attempt to transitively load the entire object graph when a select() method is called on the Dao. There is a limit to how many joins hrorm can perform. Additionally, there is a limit to how many joins a database engine will allow. Consider this when designing Dao objects.
Also remember: sibling relationships are for reading and populating objects, not for making saves or updates. If a sibling object is mutated, it must be saved itself.
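For instance, a sketch using the city and state model above (the springfieldId value is a hypothetical, previously persisted primary key): mutating a sibling only sticks if it is saved through its own Dao.

Dao<State> stateDao = stateDaoBuilder.buildDao(connection);
Dao<City> cityDao = cityDaoBuilder.buildDao(connection);

City springfield = cityDao.select(springfieldId);
State state = springfield.getState();
state.setName("New Illinois");

// Updating the city does NOT save the mutated sibling ...
cityDao.update(springfield);
// ... the state must be saved through its own Dao.
stateDao.update(state);
connection.commit();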
If you prefer that your Java entity model be made up of immutable classes, hrorm can support that. Hrorm works well with immutable objects that have distinct builder classes for managing their setters.
To allow this, hrorm provides an IndirectDaoBuilder class. The indirect moniker is intended to suggest that the entities will not be directly constructed; construction will be handled by the builder objects.
The following example uses lombok style builders, but you can roll your own if that's what you prefer.
@lombok.Builder
@lombok.Data
public class ImmutableThing {
    private final Long id;
    private final String word;
    private final BigDecimal amount;
}

IndirectDaoBuilder<ImmutableThing, ImmutableThing.ImmutableThingBuilder> immutableThingDaoBuilder =
        new IndirectDaoBuilder<>("immutable_thing", ImmutableThing::builder, ImmutableThing.ImmutableThingBuilder::build)
                .withPrimaryKey("id", "immutable_thing_seq", ImmutableThing::getId, ImmutableThing.ImmutableThingBuilder::id)
                .withBigDecimalColumn("amount", ImmutableThing::getAmount, ImmutableThing.ImmutableThingBuilder::amount)
                .withStringColumn("word", ImmutableThing::getWord, ImmutableThing.ImmutableThingBuilder::word);

Connection connection = // comes from somewhere

// this returned object implements the identical interface as any other hrorm Dao
Dao<ImmutableThing> immutableThingDao = immutableThingDaoBuilder.buildDao(connection);
It works very similarly to the regular DaoBuilder, but some extra details are required. There are now two type parameters: one for the entity itself and one for its builder object. On construction, instead of simply showing how to create a new entity instance, two parameters show how to make a new builder instance and how to make a new entity instance from the builder. Finally, all the setters are specified on the builder class, not the entity instance.
With a regular Dao, the object's primary key will be set on the object during the insert(). This is not true for Dao objects created from an IndirectDaoBuilder. The insert method will still return the newly issued ID.
Indirect Dao objects support all the mechanisms for child and sibling records that regular Dao objects do. Due to the lack of population of IDs, some care must be taken. You cannot simply insert a sibling object and then immediately place it into a new entity instance, since it will not yet have its ID set.
Hrorm is built to support tables that have sequence valued primary keys. This is generally a good way to design a schema, but it's not always optimal. For instance, if you are storing an event stream in a database, assigning keys might just be a waste of time and space.
Hrorm does have some mechanisms for supporting entities that do not have primary keys. But you cannot create a full-featured Dao for an entity without a primary key. In these cases, you can create a KeylessDao by using the IndirectKeylessDaoBuilder.
A KeylessDao does not provide all the functionality that a Dao does.
Another drawback to keyless entities is that they cannot be used as child or sibling entities. Hrorm manages all relationships via keys.
Hrorm provides little support for placing constraints on your object models or schema. The correct place for that is within your object model or schema or, best of all, both.
The DaoBuilder does allow you to mark particular columns as not null, by calling the notNull method. It will apply that constraint to the last column added to the growing definition.
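For example, to require that the NAME column never be null (a sketch; notNull applies to the most recently added column in the chain):

DaoBuilder<Person> daoBuilder = new DaoBuilder<>("PERSON_TABLE", Person::new)
        .withPrimaryKey("ID", "PERSON_SEQUENCE", Person::getId, Person::setId)
        .withStringColumn("NAME", Person::getName, Person::setName)
        .notNull();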
Where a not null condition is defined in hrorm, it will prevent the creation of null entries in the particular field in question, either through inserts or updates. However, hrorm will ignore the constraint when attempting to load records from the database, on the theory that loading questionable data is better than not loading it.
Since a properly designed object model and database schema can easily replicate this functionality and provide other much more significant invariant enforcement, it must be admitted that this facility is of little practical use.
One case where this can be helpful is during periods of data migration. If some element is moving from optional to required, hrorm can at least help prevent the creation of new records without the newly required element, until such time as more robust enforcement of constraints can be implemented by the application and database.
Hrorm provides a Validator to help ensure the database schema is in sync with the code.
The validation provided is not a substitute for testing that code works as intended. It simply checks that the names of the tables, columns, and sequences provided in the Dao descriptions exist as stated. Columns are checked to make sure they are of a correct type. As such, it can quickly find typos or other simple errors and report them. This can be particularly useful in times of database refactoring.
To check that a particular Dao is correct, simply pass it, or its builder, to the Validator::validate method with a live Connection. If the validation fails, an exception will be raised whose message describes the problems found.
DaoBuilder<Entity> daoBuilder = new DaoBuilder<>("TABLE", Entity::new)
        .withPrimaryKey("id", "SEQUENCE", Entity::getId, Entity::setId)
        .withStringColumn("STRING_COL", Entity::getStringThing, Entity::setStringThing)
        .withIntegerColumn("INT_COL", Entity::getIntegerThing, Entity::setIntegerThing);

Connection connection = // create connection just as for your application

try {
    Validator.validate(connection, daoBuilder);
} catch (HrormException ex) {
    System.out.println(ex.getMessage());
}
The testing that this performs is light. It will not attempt to make any changes to the state of the database; it will merely check to make sure the structures exist as expected. Moreover, each Dao is tested individually. Related Dao objects, even dependent child objects, are not checked by the Validator.
To create a Dao from a DaoBuilder, just pass it a java.sql.Connection:
// Assume the existence of some ConnectionPool
Connection connection = ConnectionPool.connect();
Dao<Person> dao = daoBuilder.buildDao(connection);
To create a new record in the database, we create a new instance of the class and pass it to Dao.insert().
Person person = new Person();

// set values for the fields we want
person.setName("Thomas Bartholomew Atkinson Wilberforce");
person.setHighSchoolGraduate(true);
person.setWeight(100L);

long id = dao.insert(person);
connection.commit();
After that code runs, the record will be stored in the database. Hrorm will have pulled a new sequence value and set it on the object, so the following assertions will be true. Note that for immutable objects whose Dao implementations were created using an IndirectDaoBuilder, the ID will not be set.
Assert.assertNotNull(person.getId()); Assert.assertTrue(id == person.getId());
Hrorm will automatically insert child records of the instance being saved, if any.
If the record has sibling entities, references to those will be persisted. But be careful, those sibling references must be persisted first. Sibling inserts and updates do not cascade.
Hrorm provides a few methods for reading data out of the database and instantiating entity objects.
All of the selection mechanisms below will fully read and populate the entire relevant object graph including all children and siblings and all their transitive references.
You can read an item from the database if you know its primary key.
Person person = dao.select(432L);
If you want to read several IDs at once, you can.
List<Person> personList = dao.selectMany(Arrays.asList(432L,21L,7659L));
If you want all the records (presumably for a smallish table) just do
List<Person> personList = dao.selectAll();
Most of the time, you do not know up front what ID or IDs you are interested in, so hrorm provides ways to specify which records you want.
One way is by using hrorm's ability to select by columns.
The idea of this interface is to use an instance of the entity class as a template
or key for providing the values you want to match.
Suppose we want to find all the records of people who are high school graduates and weigh 100. Create an instance of the Person object with those fields set, as follows.
Person personTemplate = new Person();
personTemplate.setHighSchoolGraduate(true);
personTemplate.setWeight(100L);
Then, we can find the matching records by passing that object and the names of the columns to filter on to the selectManyByColumns method on the Dao.
List<Person> people = personDao.selectManyByColumns(personTemplate, "IS_HIGH_SCHOOL_GRADUATE", "WEIGHT");
Notice that hrorm wants the names of the database columns, not the fields on the object.
If we know that a particular query will only return 0 or 1 results, hrorm provides a convenience method for that.
Person personTemplate = new Person();
personTemplate.setName("Rumplestiltskin");

Person person = personDao.selectByColumns(personTemplate, "NAME");
If you use this method and hrorm finds more than one record, it will raise an exception.
The templating method above can be useful, but it is not the most generic mechanism provided by hrorm. To support a variety of predicates, hrorm provides a Where object that allows construction of more complex filters than the exact matching of object fields.
A Where object is a collection of predicates, possibly nested, joined by the conjunctions AND and OR. A predicate is the name of a column, an operator, and a value.
To find all the people who are high school graduates who weigh 100, as above, set up
a where object like this:
Where where = new Where("IS_HIGH_SCHOOL_GRADUATE", Operator.EQUALS, true);
where.and("WEIGHT", Operator.EQUALS, 100L);
Then we can pass that to the select method to get the results.
Various operators are supported, not just equality. So we can now find all the people whose names are like Mark and weigh between 75 and 125, as follows.
Where where = new Where("NAME", Operator.LIKE, "%Mark%");
where.and("WEIGHT", Operator.GREATER_THAN, 75L);
where.and("WEIGHT", Operator.LESS_THAN, 125L);
We can also nest Where objects, like so:
Where where = new Where("NAME", Operator.LIKE, "%Mark%");

Where weightCheck = new Where("WEIGHT", Operator.GREATER_THAN, 75L);
weightCheck.and("WEIGHT", Operator.LESS_THAN, 125L);

where.or(weightCheck);
This will generate SQL to find anyone who has a name that matches "Mark" *or* weighs between 75 and 125.
By using static imports, we can rewrite the above as follows:
List<Person> records = dao.select(
        where("NAME", LIKE, "%Mark%")
                .or(where("WEIGHT", GREATER_THAN, 75L)
                        .and("WEIGHT", LESS_THAN, 125L)));
Which some people may find more readable.
Check the Javadocs for a complete list of supported operations.
Building where clauses can be a bit tricky. The render() method on the Where object exports the actual SQL that hrorm will generate at run time.
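A small sketch of how that can help when debugging a filter (the exact text of the rendered clause will depend on your builder and hrorm version):

Where where = new Where("NAME", Operator.LIKE, "%Mark%");
where.and("WEIGHT", Operator.LESS_THAN, 125L);

// Print the clause hrorm will generate for this Where object.
System.out.println(where.render());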
Select methods that return a list of results are overloaded to allow passing an Order object. Including an Order object will result in SQL with an ORDER BY clause added, using the column names you provide. Of course, the ordering applied by the database may be different than that applied by Java.
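A sketch of what that looks like; the Order.ascending factory method shown here is an assumption about the API, so consult the Javadocs for the exact signature:

// Select every person, ordered by the NAME column in the database.
List<Person> peopleByName = dao.selectAll(Order.ascending("NAME"));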
After making changes to the state of the object, we can call
dao.update(person); connection.commit();
This will issue an update in the database based on the primary key (id) field.
Updates will automatically propagate to children, but not to siblings.
When we are done with a person, we can issue
dao.delete(person); connection.commit();
This removes the record from the database, using the primary key, as with an update.
Deletes will automatically propagate to children, but not to siblings.
As mentioned above, AssociationDao objects are a completely distinct interface from Dao objects. There are only a handful of supported operations.
Assuming we have two entities, a Movie and an Actor, and we have defined the AssociationDaoBuilder as above, we can create new associations as follows:
Connection connection = // made this from JDBC

AssociationDao<Actor, Movie> associationDao = associationDaoBuilder.buildDao(connection);

Movie legallyBlonde = // a persisted movie
Actor reeseWitherspoon = // a persisted actor

associationDao.insert(reeseWitherspoon, legallyBlonde);
connection.commit();
If an association needs to be removed, we use the delete() method.
Actor laurenceOlivier = // another actor associationDao.delete(laurenceOlivier, legallyBlonde);
Associations work in both directions, but we have to take care with which type was on the left and which was on the right. (This has nothing to do with left or right joins, just the names for the two types being associated.)
List<Actor> cast = associationDao.selectLeftAssociates(legallyBlonde); List<Movie> career = associationDao.selectRightAssociates(reeseWitherspoon);
That's all there is to it.
Dao objects provide the ability to run some SQL functions against the database records. All the supported functions are single column aggregations, like COUNT, SUM, AVG, etc.
Using them is simple: just pass the name of the function and the column you are interested in to the appropriate run method. The functions can be run using the same Where objects as can be used for selects. To find the maximum weight of everyone named "Mark" who is a high school graduate, we would run:
Long maximumMarkWeight = dao.runLongFunction(SqlFunction.MAX,
        "WEIGHT",
        where("NAME", Operator.LIKE, "%MARK%")
                .and("IS_HIGH_SCHOOL_GRADUATE", Operator.EQUALS, true));
See the SqlFunction class for the list of supported functions.
In addition to the insert, update, and delete methods, hrorm Dao objects provide variants of those methods called atomicInsert, atomicUpdate, and atomicDelete.
These are useful if you do not mind your changes being committed automatically. But they cannot be used inside larger transactions. Additionally, these methods will close the Connection object their enclosing Dao was built with.
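A small sketch of the trade-off: one call that commits on its own and then closes the connection its Dao was built with.

Connection connection = ConnectionPool.connect();
Dao<Person> dao = daoBuilder.buildDao(connection);

// Inserts and commits in one step; no explicit connection.commit() is needed.
dao.atomicInsert(person);

// The connection this Dao was built with is now closed; build a new Dao
// (with a fresh Connection) before doing any further work.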
Many ORMs (and other database connectivity tools) provide some mechanism for lazily loading data. This is a useful feature, since it's quite common to work with data sets that are too large for your application to hold all at once.
The biggest problem with this approach is that it can lead to peculiar bugs when a connection is closed too early and something that was to be lazily loaded cannot be.
Hrorm's select methods are not lazy and so are only suitable for selecting limited quantities of data. But hrorm does provide a way to run a select without instantiating a list of all the found objects: the foldingSelect() method. This method allows a general select to be done, and then a flexible folding operation to take place on the database's returned result set.
At times, it might be more sensible to write the logic required as a select with a group by clause. But at other times you cannot express the application logic necessary in the database, and this facility exists for those times.
If you're unfamiliar with folding, here are some quick examples which show how it can be used. If you have an entity that exposes a long value and you wish to know the sum of all those values, you could write:
Long result = dao.foldingSelect(0L, (accumulator, entity) -> accumulator+entity.getLongValue(), new Where());
Which would be equivalent to writing:
Long result = dao.runLongFunction(SqlFunction.SUM, "LONG_COLUMN", new Where());
Another example shows how to build up a list of items.
List<Entity> entities = dao.foldingSelect(new ArrayList<>(),
        (list, entity) -> { list.add(entity); return list; },
        new Where());
Which is equivalent to:
List<Entity> entities = dao.select(new Where());
These two examples are a bit silly, and clearly the alternatives are better. But, folding allows you to define an arbitrary accumulation function on whatever type you wish, not just adding sums or appending to lists.
The SQL that hrorm generates to make a Dao can be accessed by calling the queries() method. The SQL is formatted in a way suitable for passing to a PreparedStatement, with embedded question marks for variable substitution.
You can also access a Queries object by calling buildQueries() on a DaoBuilder object. This provides an identical instance, but does not require a Connection, which building a Dao does.
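A sketch of inspecting the generated SQL. The accessor names on the Queries object shown below (insert(), select()) are assumptions; see the Javadocs for the exact interface.

// No Connection is needed to look at the SQL hrorm will generate.
Queries queries = daoBuilder.buildQueries();

// Accessor names here are assumptions about the Queries interface.
System.out.println(queries.insert());
System.out.println(queries.select());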
Storing dates and times is important to many applications. Hrorm of course tries to support this endeavor, but user caution is advised.
Databases support myriad varieties of dates and times, but the oldest and most established is the time zone free timestamp, which generally is some amount of time offset from some zero point. (Number of seconds, including fractional amounts, since 00:00:00 1-Jan-1970 or something.) What this means to you might vary.
The JDBC supports the timestamp concept above through the type java.sql.Timestamp, which is implemented as an extension of the widely hated (and mostly deprecated) java.util.Date. One reason (among many) that people dislike the java.util.Date type is that it isn't even a date. It's a date and time. Except it's not that either; it's a timestamp, an offset from the epoch as above. So, already we are in trouble, since the primary means of persisting date and time information of all sorts is through a mutable type with deprecated methods of confusing meaning, difficult to translate to the very good options available in the java.time package.
For a tool like hrorm, the most important thing is to expose functionality to the user in a very general way, which means using the JDBC. As always, the hrorm philosophy is to not try to be all things to all people, since the peculiarities of your needs and the underlying database capabilities are so diverse, it's better to just get out of the way. That said, for most of what you need dates and times for, timestamps work just fine, given some care. Yes, if you're just storing a date in your model, there's some precision you do not need, but so what? And if you want to make sure you remember what time zone or offset from UTC you need to attach to a persisted timestamp, you can use another column for that. Or, if your application always does things one way, you can hard code logic for conversions.
The java.sql.Timestamp class was retrofitted with converters to and from two concrete classes of the java.time package: Instant and LocalDateTime. For a more modern framework like hrorm, those are the only two real contenders for what to expose on its interfaces.
LocalDateTime is probably more useful for most applications. It has obvious ways to access information like day of the week or hour of the day that are not directly available from an Instant. It's probably the go-to choice when modeling domain classes that involve date-time values, with the caveat that if all you want is a date (or a time) you can use a smaller, more focused type.
Nevertheless, Hrorm chooses to expose java.time.Instant. Why? Because it creates better guarantees about storing the values you give it and being able to recover them. Consider the following sad example.
int year = 2018;
int month = 3;
int day = 11;
int hour = 2;
int minute = 0;
int second = 0;
int nanos = 0;

LocalDateTime localDateTime = LocalDateTime.of(year, month, day, hour, minute, second, nanos);
Assert.assertEquals(2, localDateTime.getHour());

Timestamp timestamp = Timestamp.valueOf(localDateTime);
Assert.assertEquals(3, timestamp.getHours());

LocalDateTime recoveredLocalDateTime = timestamp.toLocalDateTime();
Assert.assertEquals(3, recoveredLocalDateTime.getHour());
Leaving aside that the getHours() call is deprecated on Timestamp: on my computer and JVM, which is in the "America/Chicago" time zone, the above tests all pass. (On my computer, but maybe not on yours!) So, the time you thought you were putting into the database is not the time you will get out.
What's happening? The time initially created does not really exist: at that precise moment, the clocks sprang forward in "America/Chicago" and we never had a 2AM on 11 March 2018. Despite neither of the classes involved having a time zone component, conversion between them implicitly uses a time zone, and changes the hour. It's perhaps unfortunate that LocalDateTime allows construction of what is perhaps an illegal instance, but it's not an illegal instance until it's combined with a time zone. For all it knew at construction time, I was talking about that time in some other time zone. (Perhaps the word "Local" was a bad choice. It means local to somewhere, not local to where you happen to be. Maybe they should have called it "UnzonedDateTime"?)
The Instant conversions have no such problem, but of course, you have to be careful when your model converts from a date type to an instant type, since pitfalls exist. (Again, the following tests all pass on my computer, but perhaps not on yours!)
int year = 2018;
int month = 3;
int day = 11;
int hour = 2;
int minute = 0;
int second = 0;
int nanos = 0;

LocalDateTime localDateTime = LocalDateTime.of(year, month, day, hour, minute, second, nanos);
Assert.assertEquals(2, localDateTime.getHour());

Instant instant = Instant.from(ZonedDateTime.of(localDateTime, ZoneId.systemDefault()));

LocalDateTime recoveredLocalDateTime = LocalDateTime.ofInstant(instant, ZoneId.systemDefault());
Assert.assertEquals(3, recoveredLocalDateTime.getHour());
What hour does the Instant show? The question makes no sense: an instant only has an hour within the context of a particular time zone.
So, Hrorm tries to follow its philosophy as best it can: there is a problematic issue exposed, but it's up to the client code to solve it. But at least what hrorm does is predictable: the Instant you choose to store will be the Instant that is returned to you.
Hrorm provides a mechanism for generating your database schema through its Schema object. Using it is simple: pass your DaoBuilder objects to the Schema constructor and call the sql() method to get a String of the SQL to generate tables, sequences, and constraints.
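A minimal sketch, generating the DDL for the person example above:

// Pass in the builders whose tables and sequences should be included.
Schema schema = new Schema(daoBuilder);
String ddl = schema.sql();
System.out.println(ddl);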
There are good reasons not to use the schema generated by Hrorm for your actual database. For instance, the SQL Hrorm generates uses lowest common denominator types for columns. You may well wish to specify the precision of numbers and the lengths of strings.
Nevertheless, this can be a helpful feature. If your schema is under version control in a separate system from your code, it might be helpful for running tests. If you are doing development and rapidly changing schema, it can save you the trouble of writing and changing the SQL in tune with your object model. And you can always use the SQL Hrorm generates as a starting point, adding whatever refinements to types and other things you require by hand.
Hrorm thinks that checked exceptions are a mistake. In an application with a database dependency, you have three choices:

1. Have the checked SQLException declared in most methods all over your application.
2. Handle the SQLException somehow at each interaction with Connection, Statement, and ResultSet objects, defeating the purpose of exception handling being centralized and removed from normal application flow.
3. Translate the SQLException to some other type, descended from RuntimeException.
Hrorm opts for method 3.
Hrorm will throw a HrormException when it has a problem. If there was an underlying SQLException, it will be exposed on the HrormException.
The state of Java logging is a tiny bit unfortunate.
The java.util.logging package in the standard library is not widely used. Unfortunately, rather than a set of pluggable interfaces, it provides a concrete implementation.
Log4j is pretty ubiquitous, but not universal. Additionally, hrorm currently has no dependencies. It would be a shame to add one.
Hrorm therefore uses the java.util.logging implementation. There are ways to redirect that to log4j and other logging frameworks.
Hrorm logs all the SQL it issues at INFO level to a logger named "org.hrorm".
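For example, using the standard JDK logging API, that SQL logging can be silenced or surfaced by configuring the "org.hrorm" logger:

import java.util.logging.Level;
import java.util.logging.Logger;

// Raise or lower the verbosity of hrorm's SQL logging.
Logger hrormLogger = Logger.getLogger("org.hrorm");
hrormLogger.setLevel(Level.WARNING); // hide the INFO-level SQL statements
// hrormLogger.setLevel(Level.INFO);  // or show them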
Hrorm is designed to have a small surface area for clients. Most of the time, clients should only need to interact with a few hrorm types: DaoBuilder, Dao, Where, HrormException, and a couple of others. There are a few other types that a client may want to use, but most of the classes in hrorm are internal to it.
In spite of this, almost all the classes in hrorm are public and contain public constructors and methods. If you feel like instantiating a JoinColumn object, hrorm feels no need to try to stop you.
Most of the time, hrorm will point out in the Javadocs where classes are not intended for clients to use directly.