
PostgreSQL backend #151

Closed · wants to merge 39 commits
Conversation

mikfire
Contributor

@mikfire mikfire commented Jan 26, 2016

This is NOT ready for a full merge into master, but I wanted to raise awareness and I need some help.

This branch enables Brewtarget to use PostgreSQL as its backend database. As it stands, I've implemented about 90% of what we need. I've added the configuration screens, I've added the dialogs, I've made DatabaseSchemaHelper work, I've refactored that monstrosity into something a bit more usable, and I've added functionality to import from SQLite to PostgreSQL.

Why? In a very simple word, mobile. I tried for months to get the SQLite file to sync between my mobile device and my desktop and failed. I started work on this branch on Thursday, and have enough of a working prototype to publish. If we get the data off the device, mobile becomes trivial.

Directions for setting up PostgreSQL can be found in data/POSTGRES.markdown. The file also explains some of the reasoning behind my doing what I did.

I need help with:

  • the ability to merge an updated SQLite database into an existing PostgreSQL database;
  • the interface is still very rough around the edges -- the configuration screen is way too chatty, for example;
  • the import is really fragile. I've found some glaring inconsistencies in the SQLite database, like fields with a null name that are defined as non-null, fields specified as varchar(256) with 300+ characters in them, etc. Right now, those break the import. I need to know if those are simply artefacts in my database or if they are widespread;
  • the import works in one direction: SQLite -> PostgreSQL. Do we need to do the reverse? I think I can, and likely will;
  • there are some weird bits in the code -- notice what I had to do to unload and why;
  • should we subclass Database and DatabaseSchemaHelper? Should we create another low-level class to handle the differences between SQLite and other dbs (the booleans killed me, and don't even start on datetime stamps)?
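For anyone wondering whether the varchar inconsistencies are plausible: SQLite treats declared column lengths as advisory, so over-long values are stored silently and only surface when a stricter engine like Postgres refuses them. A quick stand-alone demonstration using Python's built-in sqlite3 (an illustration only, not Brewtarget code; the `hop` table here is just an example):

```python
import sqlite3

# SQLite accepts a VARCHAR(10) declaration but never enforces the length,
# so a 300-character value is stored without complaint. Postgres would
# reject the same INSERT, which is exactly what breaks the import.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hop (id INTEGER PRIMARY KEY, name VARCHAR(10))")
conn.execute("INSERT INTO hop (name) VALUES (?)", ("x" * 300,))
stored, = conn.execute("SELECT name FROM hop").fetchone()
assert len(stored) == 300  # all 300 characters survived
```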

Things I haven't tried:

  • honestly, I haven't had the courage to actually create anything new. That will happen tomorrow.

Things to consider:

  • Any backups have to be done from PostgreSQL, not Brewtarget;
  • No more backup databases, which is both a plus and a minus;
  • I still suspect I should probably have subclassed something, and I am afraid adding a third RDBMS (Oracle, DB2, MySQL, etc.) will really break things. So far, I've tried to make it as extensible as I could, but ....

Mik Firestone added 10 commits January 21, 2016 02:15
So far, so good.

I have translated the schema from SQLite to PostgreSQL and I
have some Perl code that can take an arbitrary SQLite database and import the
information into PostgreSQL. That took some work. SQLite doesn't seem to enforce
size limits, but Postgres does.

I have modified the database classes to connect to PostgreSQL using parameters
in the config file. That took some doing -- there were a number of backticks
that caused problems. It also seems Postgres cares more about type conventions
-- enclosing numbers in single quotes caused problems.

So far, I can load the database into brewtarget and read. Writing seems to
kind of work, but there are error messages. Closing dumps core, because it
looks like we are trying to free memory that isn't done yet.

But for one day's effort, I'm pleased.
Oops. Forgot to add these to the first commit.
Realized I needed to reset the indices after the import, so added code to do
that.
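Resetting the indices here means realigning each serial column's sequence with the imported data. A hypothetical helper showing the statement involved (it assumes Postgres's default `<table>_<column>_seq` naming; not the actual Brewtarget code):

```python
def reset_sequence_sql(table: str, id_col: str = "id") -> str:
    # After bulk-inserting rows with explicit ids, the serial sequence
    # still points at 1; setval() moves it past the highest imported id.
    return (
        f"SELECT setval('{table}_{id_col}_seq', "
        f"(SELECT COALESCE(MAX({id_col}), 1) FROM {table}))"
    )

sql = reset_sequence_sql("hop")
# -> SELECT setval('hop_id_seq', (SELECT COALESCE(MAX(id), 1) FROM hop))
```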

Started working on my own database, and realized there were problems with
fermentDate field. Fixed that, but I don't like the fix. Instead of using the
date when displayed, it is using the date when the query was run. This could
have some weird side effects.
DatabaseSchemaHelper was kind of a mess. I've refactored, re-organized, etc.
It looks a lot better.

It also occurred to me that we won't always be supporting just SQLite and
PostgreSQL. So I refactored a lot of things from previous checkins to support
multiple RDBMSs, not just two.
DatabaseSchemaHelper has been massively refactored. It can at least create an
empty database now.
DatabaseSchemaHelper now creates an empty database in postgresql as well.
In reverse order of the summary: I was finally able to get the
QSqlDatabase::tables method to return properly. I made a beginner's mistake
and confused pass-by-copy with pass-by-reference. Both databases now
initialize properly on startup, and don't initialize when the tables exist.

The cores were a more interesting problem. It would seem PostgreSQL doesn't
like to be closed and/or removed. The driver itself properly takes care of
that when it is destroyed. Or so it seems. I modified unload() so it only
works for SQLite, and the cores went away.
Added a new tab to the options dialog for configuring the rdbms. If you change
the settings, you cannot save until you successfully test the connection. That
required me writing some code to ... well, test the connection.

This workflow is annoyingly chatty. I pop way too many dialogs, but telling
the user what went wrong is important, as is feedback when it goes right.
Hopefully, somebody else will weigh in on this.

We need to think about merging the directory tab with the database tab. They
sort of configure the same thing, so I think they are the same idea. I think I
can do that -- enabling only those fields that are relevant to the chosen rdbms,
with a few text changes on the labels, should get it done.
Well. It looks like I can import (maybe) an SQLite database into Postgres.
There was some oddity in my database, and I am uncertain how widespread it is.

I think this may be ready for upload.
@mikfire
Contributor Author

mikfire commented Jan 26, 2016

I think I may have left some extra commas in the table definitions in DatabaseSchemaHelper::create() for the id columns. I will double check and fix as required.

I also need to fix the instructions -- the Perl script was translated into Database::convertSqliteToPostgres() and should happen automatically.

@rocketman768 rocketman768 changed the title Skunks/postgres [WIP] Skunks/postgres Jan 26, 2016
Right now, there is an annoying error message when creating a brewnote.
Postgres 9.4 and earlier do not support any kind of "insert or update" syntax
-- frequently called "upsert" in Postgres terminology. PostgreSQL 9.5 does
have an upsert syntax, but PostgreSQL 9.5 was released Jan 6, 2016. It isn't
widespread yet. Unless somebody really screams, I am saying we support 9.5
and forward.
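For reference, the 9.5 upsert is the `INSERT ... ON CONFLICT DO UPDATE` form. SQLite (3.24+) later adopted the same syntax, so Python's built-in sqlite3 can demonstrate the semantics; this is an illustration of the statement shape, not Brewtarget's actual inventory code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (ingredient TEXT PRIMARY KEY, amount REAL)")

# Postgres 9.5's "upsert": insert the row, or update it if the key exists.
upsert = (
    "INSERT INTO inventory (ingredient, amount) VALUES (?, ?) "
    "ON CONFLICT (ingredient) DO UPDATE SET amount = excluded.amount"
)
conn.execute(upsert, ("cascade", 100.0))
conn.execute(upsert, ("cascade", 250.0))  # updates instead of failing
amount, = conn.execute(
    "SELECT amount FROM inventory WHERE ingredient = 'cascade'"
).fetchone()
assert amount == 250.0
```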

This meant I had to install a new server, which led me to wondering how I was
going to copy the data, which led me to modifying convertSqliteToPostgres()
to simply be convertToPostgres(), which led me to refactoring that code a few
times over. So instead of solving the original bug, I fixed something
completely different and will fix the first bug tomorrow.

I also updated the instructions to remove the perl references.
@mikfire
Contributor Author

mikfire commented Jan 27, 2016

I am saying we support only postgres 9.5 and later at the moment. I need that version for the upsert functionality. I can change that, but we lose some easy function for the inventory tables.

Right now, I copy the data from the old DB to the new DB on startup. It occurs to me that there is no reason I couldn't do that as soon as the options are set. Any preferences? I am sort of leaning towards sooner than later.

Mik Firestone added 3 commits January 27, 2016 10:05
Accidentally changed a test from true to false. This fixes that mistake.
Lots of things got jammed into this.

It mostly works now. You can create recipes and brewnotes now, and it all
looks good. I need to test creating other objects, but that should all work
now.

I changed the database conversion from a post-process to a pre-process. It
made more sense, but it took a lot of effort. The vast majority of this change
is doing that. I really want to try simply re-opening the db to see if we can.

We don't save the database password by default any more. If you decide to save
it, a warning is popped saying you are doing so at your own risk. If you
choose not to, we should prompt at startup for the password. I haven't tested
that.

DatabaseSchemaHelper has been updated to provide useful feedback if something
fails. Every create method. That hurt.

I need to get the updates working next and the merges.
Tested things without saving the password. The prompt pops, and it even
prompts twice if you typo the password. I meant to test that :)

A few minor changes to the logic. Deselecting the save password doesn't
require testing the connections, so that had to be reworked. I would like to
make it more obvious that the test connection button has been disabled.

I *think* the db merge just works. I was at least able to do the whole
populating child tables update, which makes me happy. If I read the rest
properly, then it should be good. I think I can test this, so will before the
next checkin.
@@ -144,6 +145,14 @@ class Brewtarget : public QObject

};

//! \brief Supported databases. I am not 100% sure I'm digging this
// solution, but this is more extensible than what I was doing previously
enum DBTypes {
Member

DBType

@rocketman768
Member

So, the changes to DatabaseSchemaHelper.cpp are basically a rewrite of the whole file. Can you summarize to help me try to reread and reunderstand the whole thing?

@mikfire
Contributor Author

mikfire commented Jan 28, 2016

Not so much a rewrite, as it was a reorg with enhanced error reporting. The create() method was 600+ lines with no error reporting. If something went wrong, you had no idea what and it was a horrible time finding it. Please believe me, because it did and I didn't and it was.

In defense of my own sanity, I split creating the tables into 6 pieces: core (aka, BeerXMLElements), bt_* tables, *_in_recipe tables, triggers, children tables and inventory tables.

Each core table got its own method for creating it. They are too unique to bundle up. The other tables were more repetitive and got one method that took the correct parameters to create the right table. The triggers are just annoying and have to be handled uniquely. Every create call now throws an error message if something fails. I would honestly prefer to stop all processing when an error is found, but I haven't had the patience to do that yet.

I also had to defer some of the strings for later evaluation. I don't know some values at initialization time -- mostly the database type -- so they had to be set when create() is called. It got more complicated when I moved the database transfer from post to pre.

migrateNext() just got split into its logical subcomponents. Since I was already there, the way to split it up was obvious, and the size of migrateNext() is a strictly increasing function, I thought I should take the opportunity to set that right. I really do not like any method longer than 50 lines. It sometimes cannot be avoided, but that doesn't mean we shouldn't try. One day, I will get annoyed and split up MainWindow(), and it will be a very satisfying 3 days of janitorial work.

Is there any chance of arguing you out of the DBTYPE_ prefix? I can put a few arguments forward, but if there's no chance I will not waste my time. I will fix the rest.

Mik Firestone added 2 commits January 28, 2016 17:27
This is getting progressively closer to a complete rewrite of
DatabaseSchemaHelper. mea culpa.

This started because I didn't like the list I had embedded in convertDatabase.
I was able to get how the field was defined from the QSqlField object, so that
should have made it easy. It didn't, because there was no definitive list of
tables in our database. Brewtarget::DBTable only had about half of them.

That started me down a long, long chain of how to do this such that it can be
maintained and makes adding tables as easy as possible. It was also important
that I be able to use this information to properly rebuild a database, which
means order is important. This checkin is about 70% of that solution.

I've added a new table to the database called bt_alltables. Each row is a name
of a table, the value of that table in Brewtarget::DBTable and three booleans
that will affect how the information gets used. The table is created, but I
still need to do the work in Database to take advantage of it. That will be
the next checkin. Nothing is broken with this checkin, but it is really only
half functional.
This took far more effort than I would have actually expected, all to simply
remove a few hardcoded lists.

The metatable has been reworked to actually fit how it is being used. That
accounts for (some) of the changes to DatabaseSchemaHelper. Most of the rest
are just trying to make it a bit more consistent.

Since I am adding a new table, I increased the database version. Which means I
added a new migrate_ method.

I tweaked the option dialog a little bit to make sure I don't save the
password if the upgrade fails, and to find a color for the "test connection"
button I like and can read. Bright yellow fails the second criteria.

The major changes are to database.cpp. Some lists are no longer hard coded,
which makes me happy, but I'm not sure it was worth the effort. It does mean you
only need to add the table into DatabaseSchemaHelper and add a new value into
the Brewtarget::DBTable enum, and the rest should just work.

There's more testing to be done tomorrow. I really, really need to find
every place we write to the database and do something we've never done before
-- check return codes. Now that we are on the network, all things can fail and
we should probably at least log some of them.
@mikfire
Contributor Author

mikfire commented Jan 29, 2016

This last one gets real close to done. I need to just make sure I haven't broken anything when using SQLite, and I really would like to get the postgresql -> sqlite conversion working.

I need a lot more error checking all through the code. Every place we touch the database needs to make sure the operation succeeded; networks can fail at any time and we should catch that.

@rocketman768
Member

This is something Brewtarget needs for its future, so thank you for this.

I read most of it. It is huge though, so it's hard for me to offer any substantive comments. I'm sure I have failed to notice some bugs, so all I can ask is: does it work and what validation did you do? We will obviously need a good test plan to make sure there is no path to corrupt user data now that the backup db is gone.

Thanks for modularizing the schema helper. I agree with you: shorter readable functions are better.

Do we need to subclass Database/DatabaseSchemaHelper? That is not clear to me. If we ever decide that support of a very different database type is necessary, then it should become clear at that point. For now, meh. Maybe others' workflow is different, but I always write first and refactor later when I find myself copying too much code.

Is this still a Work In Progress? What else do you need?

@mikfire
Contributor Author

mikfire commented Feb 1, 2016

The validation question is really hard to answer. I validated with my
real sqlite database. Not the default one, but the one that has my
brewing data since my first use of brewtarget around v1.2 days. I know it
works on my data. Don't know if it will work on yours. I am not usually a
good test subject, but my database is as dirty as it gets -- I've lost
track of the number of half baked ideas that have made an absolute mess of
my db.

The process is non-destructive. If it doesn't work, you just need to delete
some values from your brewtarget.conf file and it all reverts to sqlite and
no damage done. I've lost track of the number of times I've practised this
and it works. I've scattered enough log statements that if it does go
wrong, I can likely figure out what the malfunction was from those logs.

What I need.

  1. I would really like somebody else to try this. It's non-destructive. Set
    up a pgsql database and give it a run. Let me know what doesn't work. Take
    me one step better, and do something on AWS.
  2. I need to make a method for converting from pgsql->sqlite. In my
    scheming for mobile, I have a use case for easily converting between the
    networked and non-networked dbs.
  3. Error handling and transactions. When your db is a local file, you can
    make any number of assumptions about things working. When it's a networked
    system, you can't assume anything worked, because it can all fail. I have
    been looking at this over the weekend, and it is a very hard problem.
  4. Merging. Still haven't tested, still gives me sweats. This is closely
    related to item 2.

Having said that. I brewed with the new code today, using pgsql as my
backing store. It worked without a hitch. I ran it side-by-side with what I
use for my real data and it all seems to have worked. I am likely going to
transition to the online version very soon. Once there, mobile is only a
build chain away.

Mik


Mik Firestone added 2 commits February 1, 2016 16:51
I promised to make sure things still worked with the default system. They
didn't, but they do now.

The biggest problem was that I was using the existence of the settings table
to determine if the database was initialized. Unfortunately, the settings
table didn't exist until version 4, and we ship an earlier db. It caused
issues. I've since fixed that by assuming that an uninitialized db will have
no tables in it. This could break.

The other problem was a bit more devious. The upgrade to version 210 uses
create_childrenTables() and create_inventoryTables(), which ultimately try to
put a line into bt_alltables, which isn't created until the upgrade to version
6. I added a simple boolean to stop that behaviour. I am not certain the
bt_alltables was a good idea.

I finally "added" the methods to go from pgsql->sqlite. I realized I just
needed a bit more thought in some existing methods, a little renaming and the
same code that copied from sqlite->pgsql can do the work. It does require an
*empty* sqlite database. Not sure if I should put something in place to catch
that or not.

I did a bit of interface tweaking to make this all happen.
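The empty-database requirement for the pgsql->sqlite direction could be guarded with a check like the one below (a hypothetical Python sketch; Brewtarget would do the equivalent through QSqlDatabase::tables()):

```python
import sqlite3

def is_empty_sqlite(path: str) -> bool:
    # A database is "empty" if it has no user tables; sqlite_master lists
    # every table, and SQLite's internal ones are prefixed "sqlite_".
    conn = sqlite3.connect(path)
    count, = conn.execute(
        "SELECT COUNT(*) FROM sqlite_master "
        "WHERE type = 'table' AND name NOT LIKE 'sqlite_%'"
    ).fetchone()
    conn.close()
    return count == 0

ok_to_convert = is_empty_sqlite(":memory:")  # a fresh database has no tables
```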
Removing the things from data that didn't need to be there
Mik Firestone added 3 commits February 14, 2016 00:10
Slowly working my way through the addToRecipe methods. I thought about moving
them to Recipe, and I'm still thinking of it. The syntax would be cleaner --
instead of Database::instance().addToRecipe(recipe, thingy) it would be
recipe->add(thingy). Just need to figure out if I can.

But not on this commit.
Slowly working my way through database.cpp and adding transaction boundaries
where needed. It is slow and tedious. I am currently not planning on chasing
any of this upstream.

I also made sure every time you add something to a recipe, makeDirty() is
called. It was a little hit or miss.

I tried to see what happens if the database goes down while in mid session.
Doing a forced stop isn't pretty -- cores get dumped and the stack trace makes
me think we will have to rethink our approach.
tl;dr is don't call recalcAll inside a transactional boundary.

So. Transaction boundaries are tough, mostly because we've some really
squirrely calls going on. I would really love to untangle the calls, but that
will be a few month's effort that I don't feel like devoting.  We really suck
at return codes. I'm dumping core left, right, sideways and center because
database methods now return NULL when we never expected it. And yes, I am at
least as guilty as anybody, and likely more so. The tree models are going to
hurt.

The best example of the squirrely is setDisplay(false) anywhere in the
Database class. This calls BeerXMLElement::setDisplay, which calls
BeerXMLElement::set() which calls Database::updateElement(), which calls
SetterCommand::append() and then SetterCommand::redo(), which calls
SetterCommand::setterStatements() before writing to the database. In this
example, it is almost impossible to figure out where the transaction
boundaries are, who should open them and when.

It is also somewhat ...  useless. The point of the SetterCommand class is to
provide undo (which I still don't think can be made to work). If we are
importing something, or copying something or ... we don't need to push
modifying the display onto the undo stack -- deleting the created element is
enough.

The other example is something like Recipe::recalcAll(), which we use an
obscene number of times everywhere. This isn't really bad, but I am not really
sure the Database class should be doing that.  recalcAll() will save the
recalculated values to the database, which calls Recipe::set, which calls
BeerXMLElement::set()... and wreaks all sorts of havoc with the transaction
boundaries. And do we really have to? We emit a bazillion signals that should
cause the right thing to happen without invoking the overhead that is
recalcAll.
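The transaction-boundary pattern being added throughout database.cpp amounts to: open one boundary around a group of related writes, and roll the whole group back if any statement fails. A generic sketch in Python's sqlite3 (not the actual Brewtarget code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE recipe (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.commit()

try:
    with conn:  # one transaction: commit on success, rollback on exception
        conn.execute("INSERT INTO recipe (name) VALUES (?)", ("IPA",))
        conn.execute("INSERT INTO recipe (name) VALUES (?)", (None,))  # NOT NULL violation
except sqlite3.IntegrityError:
    pass  # the failure undid the first insert as well

count, = conn.execute("SELECT COUNT(*) FROM recipe").fetchone()
assert count == 0  # neither write survived
```

The hard part the commit describes is exactly deciding where `with conn:` belongs when nested calls each want to write.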
@mikfire
Contributor Author

mikfire commented Feb 16, 2016

We really need to spend some time untangling the cross-class calls. I don't think any of it actually required, and it is making the transactional boundaries hard to figure out.

@mikfire
Contributor Author

mikfire commented Feb 20, 2016

Ok. I have a problem I need others helping on. I am trying to test error cases, and I keep getting core dumps.

To reproduce, build this branch, start brewtarget and then stop the postgresql database. Bad things will happen, and core will be dumped. The problem is I can't figure out why. Here is what I think I know:

  1. Stopping the database causes the postgresql driver to emit some signal (I think)
  2. Qt catches that signal and starts refreshing everything, starting with the fermentable table
  3. QSortFilterProxyModel->data(fermnamecol) is called, which calls fermentabletablemodel->data(fermnamecol), which calls database->get(name)
  4. Database->get returns empty, fermentabletablemodel->data return empty
  5. QSortFilterProxyModel dumps core.

I can catch the error in step 4, but I can't seem to do anything to stop step 5. Is it okay to throw a warning message at step 4 and simply close BT? If it isn't, can anybody else figure out why this is coring?

Found some surprises in BeerXMLElement that suddenly made what I wanted
possible. I am also wondering if I should be playing harder w/
try-throw-catch. I should likely be using those to propagate errors up the
stack.
@theophae
Contributor

I wanted to help by doing some tests, but I really don't know how to set up the PostgreSQL DB. I've installed postgres-9.5 on my Debian and I've fought a bit with the basic configuration. Now I have succeeded in creating a user with the necessary rights to create a DB, and I've created an empty DB. But I don't know what to do next.

@ghost

ghost commented Feb 20, 2016

I've not fixed the crash but I may have fixed another error I see.

[12:07:38.644] ERROR : bool Database::getElements(QList<T*>&, QString, Brewtarget::DBTable, QHash<int, T*>, QString) [with T = MashStep] could not execute query: SELECT id as id FROM mashstep WHERE mash_id = 51 AND deleted = 0 : ERROR:  operator does not exist: boolean = integer
LINE 1: ...CT id as id FROM mashstep WHERE mash_id = 51 AND deleted = 0
                                                                    ^
HINT:  No operator matches the given name and argument type(s). You might need to add explicit type casts.

Apologies for the inline patch but Firefox keeps crashing when I attach it:

diff --git src/brewtarget.cpp src/brewtarget.cpp
index 13b7584..910271f 100644
--- src/brewtarget.cpp
+++ src/brewtarget.cpp
@@ -542,7 +542,7 @@ QString Brewtarget::dbFalse(Brewtarget::DBTypes type)
    QString retval;

    if ( whichDb == Brewtarget::NODB )
-      whichDb = (Brewtarget::DBTypes)option("dbType", Brewtarget::SQLITE).toInt();
+      whichDb = (Brewtarget::DBTypes)option("dbType", Brewtarget::dbType()).toInt();

    switch( whichDb ) {
       case SQLITE:
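The underlying issue is that SQLite stores booleans as 0/1 integers while Postgres has a true boolean type, so literals like `deleted = 0` must be routed through a per-backend helper. A Python sketch of the idea behind Brewtarget's dbTrue()/dbFalse() (names mirrored for illustration only; this is not the actual code):

```python
SQLITE, PGSQL = "sqlite", "pgsql"

def db_false(db_type: str) -> str:
    # SQLite compares booleans as integers; Postgres wants boolean literals.
    return "0" if db_type == SQLITE else "false"

def db_true(db_type: str) -> str:
    return "1" if db_type == SQLITE else "true"

# The failing mashstep query, rebuilt with the correct literal for Postgres:
query = f"SELECT id FROM mashstep WHERE mash_id = 51 AND deleted = {db_false(PGSQL)}"
```

The patch above matters because the helper only picks the right literal when it is told the right backend.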

@mikfire
Contributor Author

mikfire commented Feb 20, 2016

You are almost done. Just build this branch, then go to tools->options and
go to the database tab. Change the db type from the drop-down, fill in the
information, test the connection, and follow the prompts.

I did write a quick howto in dev-doc/POSTGRES.markdown that may be of some
help.

Mik

@theophae
Contributor

I did write a quick howto in dev-doc/POSTGRES.markdown

It seems that you've never committed it ^^

My problem seems to be at library level, because I'm getting this message:
QSqlDatabase: QPSQL driver not loaded

@mikfire
Contributor Author

mikfire commented Feb 20, 2016

Oops. Color me embarrassed. Doesn't do me any good to write docs if nobody
can read them.

I'll get that checked in tonight.

Mik

Mik Firestone added 4 commits February 20, 2016 23:02
As I said previously, it doesn't do much good if I write docs and nobody can
read them. This fixes that.
In one of my previous checkins, I mentioned I should maybe get a book on c++
and read it? Well, I did something close -- I googled and read a lot of blog
entries. I've changed the catch-throw mechanism so that it just keeps throwing
the errors up the stack. Sooner or later, I will get either to MainWindow or
Brewtarget::run and let it handle the errors. It works *much* better now and
reads well too. I have had cause to test it a lot.

As a bonus feature on this checkin, I figured out how to collapse the entire
Database::remove() into a single, neat template. That awful select statement
just grated on me. It did require me changing all of the deleted*Signals (eg,
deletedHopSignal) into a single, overloaded deletedSignal.

As a second bonus on this checkin, I fixed yet more in DatabaseSchemaHelper.
Somewhere around line 1105, DatabaseSchemaHelper::migrateNext() says "Drop
and re-create children tables with new UNIQUE requirement". It drops the
tables, and recreates them using childrenTable(). Odd thing: childrenTable()
never set a UNIQUE constraint on any field. Took me some digging to figure
that one out. SQLite seems to accept a lot of constraints and then ignore them.

Migrating to postgreSQL should work again.
Just working my way down database.cpp.
Sometimes, cleaning things up feels better than it should. SetterCommand is no
more. In consulting with rocketman, we both felt that what SetterCommand
originally intended to do -- provide an undo stack -- simply wasn't possible in
its current form. Instead, it just made an already fairly complex part of the
code more complex and was wreaking havoc on the transactional boundaries.

So I moved the necessary parts to Database::removeRecord and
Database::updateEntry and removed SetterCommand.

This checkin is stable and is known to work against postgresql. I still need
to take a few minutes and test against sqlite. My testing is to: copy a recipe
(which tests out almost every table we have), brew it and then change the boil
time.
@mikfire
Contributor Author

mikfire commented Feb 24, 2016

@EvansMike what were you doing when that error was generated? It implies there is one more place where I missed a boolean being misused, which will depress me but not surprise me. That your patch fixed the problem is more surprising.

PS - But your patch is correct. Thanks!

Mik Firestone added 2 commits February 23, 2016 23:43
Just a small checkin to update the documentation a little.
Thanks to @EvansMike for finding this one. dbTrue() and dbFalse() weren't
paying proper attention to the already defined _dbType variable. This should
fix that.
@ghost

ghost commented Feb 24, 2016

Hi mikfire
I was testing your postgres code to help with the crash, but as to exactly what I was doing? I forget. Glad to be of help though.
Mike

Mik Firestone added 2 commits February 25, 2016 00:29
I don't feel like doing heavy work tonight, so I did some light work.

MainWindow was changed to disable the save, backup and restore menu options
when you are using postgresql. I also added a tool tip to the Brewtarget logo
to show information about which database you are using.

I also looked at some of the changes I made previously, and didn't like them.
I specifically have a variable that is initialized when we load the config
file to say what the db type is. So I use that if possible.

Finally, I added some minor bits to make sure we don't try to clean up the
postgresql database on exit.
Finished making Database do try/catch. A minor change was required in
OptionsDialog to support these changes. I also made a few methods return void
instead of bool.

I am a little less confident in the changes I made to sqlDatabase(). But it
seems to me we are opening a new connection, and therefore have to supply the
connection information. Although it worked previously, so maybe we don't? It
runs, and seems to do what it should.
@mikfire
Contributor Author

mikfire commented Mar 2, 2016

Right. I think the core is done. Anybody brave enough to merge this?

@rocketman768 rocketman768 changed the title [WIP] PostgreSQL backend PostgreSQL backend Mar 2, 2016
@rocketman768
Member

Done. A few months remain for further testing and fixups.

4 participants