BulkInsert issue with changing database schema #72
Comments
Hello @riedd2,

How many schemas do you have? If you only have 2-3 schemas, one way is to create a context that inherits the main context for every schema. So every context/schema is now unique, which makes everything easier.

If that doesn't work for you, let me know more about your current scenario and we will look at what we can do.

Best Regards,

Jon
Hey @JonathanMagnan,

Thanks for your reply. I'm creating tables based on JSON schemas (we have a lot of them).
Thank you @riedd2,

Before we start to see if we can find a solution, we have one last question: are you using purely the …?
I'm sorry, I should have mentioned that from the beginning. I hope this answers your question.
Hello @riedd2,

I did a follow-up with my developer today, as we are late on this request, and we are currently not sure how to handle this scenario.

We probably don't fully understand what you mean by running multiple bulk operations in parallel on different versions of the schema (do you mean a table here, or really a schema such as …?).

The current schema/table information is "cached" in a …

If we make this property public:

```csharp
public class InformationSchemaManager
{
    // will become public
    internal static ConcurrentDictionary<string, Table> MemoryTable = new ConcurrentDictionary<string, Table>();
}
```

Will it be enough for you, as you will be able to set your own implementation of the …?
Hey @JonathanMagnan,

Thanks for your response.

Scenario 1: …

Scenario 2: …

These scenarios run in parallel on different in-memory databases, and (if I understand correctly) BulkOperation will cache the schema of table "example" from whichever scenario runs first. If scenario 1 has run first, the bulk insert in scenario 2 will ignore the additional column, since the schema / table information is cached from scenario 1.

This issue bubbled up in our tests surrounding database (schema) evolution, e.g. testing logic against the previous version as well as the current one. But it could also happen in a production scenario.

I think with your proposed solution we should be able to address the issue in our case.

Thanks for the help.
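To make the collision concrete, here is a minimal standalone sketch (not the library's actual code) of why a schema cache keyed by table name alone returns stale information across parallel databases, assuming `GetOrAdd` semantics like the `ConcurrentDictionary` mentioned above:

```csharp
using System;
using System.Collections.Concurrent;

class SchemaCacheSketch
{
    // Assumption for illustration: the cache key is the table name only,
    // with no database/connection component.
    static readonly ConcurrentDictionary<string, string[]> Cache =
        new ConcurrentDictionary<string, string[]>();

    // GetOrAdd returns the already-cached entry if present,
    // ignoring the schema of the database actually being written to.
    static string[] GetColumns(string table, string[] liveSchema)
        => Cache.GetOrAdd(table, _ => liveSchema);

    static void Main()
    {
        // Scenario 1 runs first: table "example" has two columns.
        GetColumns("example", new[] { "Id", "Name" });

        // Scenario 2: same table name, but the schema gained a column.
        var s2 = GetColumns("example", new[] { "Id", "Name", "Extra" });

        // The stale entry wins: "Extra" is never seen.
        Console.WriteLine(string.Join(",", s2)); // prints "Id,Name"
    }
}
```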
Oh thank you, now everything makes sense if you use an …
Yes, the issue can occur quite easily in the test scenario using …
Hello @riedd2,

Unfortunately, the idea to make the dictionary public has not been accepted. However, my developer added the option `DisableInformationSchemaCache`. You can disable the cache per operation or globally:

```csharp
BulkOperationManager.BulkOperationBuilder = builder =>
{
    builder.DisableInformationSchemaCache = true;
};
```

So whenever you don't want to use the cache, you can now disable it.

Could this new option work for your scenario?

Best Regards,

Jon
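Only the global builder form is shown above. A hedged sketch of the per-operation form, assuming the same `DisableInformationSchemaCache` flag is also settable on the operation itself (the `connectionString` and `dataTable` variables are placeholders):

```csharp
using System.Data;
using System.Data.SqlClient;
using Z.BulkOperations;

static void InsertWithoutSchemaCache(string connectionString, DataTable dataTable)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();

        var bulk = new BulkOperation(connection)
        {
            DestinationTableName = "example",
            // Assumption: the option can be set per operation,
            // mirroring the global builder form above.
            DisableInformationSchemaCache = true
        };

        bulk.BulkInsert(dataTable);
    }
}
```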
Hey @JonathanMagnan,

Sorry for the late reply. In general, the proposed solutions should address the issue, assuming …

With the option above, we will need to disable the cache in general, or at least once the database structure changes for the first time. This means that we will lose the benefit the cache provides; I'm not sure how much of an impact this will be.

If we cannot implement / access the cache, would it be possible to have an option to just clear the cache on demand? We could do this once we rebuild the database, and let operations start caching again with the new schema.

Thank you for your help.
Hello @riedd2,

To clear the cache, you have the method `InformationSchemaManager.ClearInformationSchemaTable()`.
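Putting the thread's suggestion together, a sketch of the clear-on-rebuild approach: clear the cache once after the database structure changes, so subsequent operations re-read and re-cache the new schema. `DropAndRecreateTables` is a hypothetical placeholder for your own rebuild logic; `ClearInformationSchemaTable` is the method named in this thread.

```csharp
using Z.BulkOperations;

static class DatabaseEvolution
{
    static void RebuildDatabase()
    {
        // Hypothetical helper: apply the new schema version.
        DropAndRecreateTables();

        // Invalidate the cached schema information once, so bulk
        // operations re-read the new structure and then benefit
        // from caching again.
        InformationSchemaManager.ClearInformationSchemaTable();
    }

    static void DropAndRecreateTables()
    {
        // placeholder for your schema migration / rebuild logic
    }
}
```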
Hey all,

I'm using `BulkInsertAsync` in parallel with different / changing table schemas. I ran into some problems because the framework caches the table schema. I found the following related issue: #24

Unfortunately, the static `InformationSchemaManager.ClearInformationSchemaTable();` workaround does not work in our scenario. I would like to implement a custom schema cache to prevent / control the schema caching. However, the approach mentioned in the linked ticket is no longer available.

Can you advise me in this scenario?

Thanks for the help.

Kind Regards,
David