isolation, propagation

Can someone explain what the isolation & propagation parameters in the @Transactional annotation are for, with a real-world example? Basically, when and why should I choose to change their default values?


Good question, although not a trivial one to answer.

Propagation

Defines how transactions relate to each other. Common options:

  • Required : Code will always run in a transaction. Creates a new transaction or reuses one if available.
  • Requires_new : Code will always run in a new transaction. Suspends the current transaction if one exists.

Isolation

Defines the data contract between transactions.

  • Read Uncommitted : Allows dirty reads.
  • Read Committed : Does not allow dirty reads.
  • Repeatable Read : If a row is read twice in the same transaction, the result will always be the same.
  • Serializable : Performs all transactions in a sequence.

The different levels have different performance characteristics in a multi-threaded application. If you understand the dirty-read concept you will be able to select a good option.


    Example of when a dirty read can occur:

      thread 1   thread 2      
          |         |
        write(x)    |
          |         |
          |        read(x)
          |         |
        rollback    |
          v         v 
               value (x) is now dirty (incorrect)
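The timeline above can be reproduced in plain Java without a database. This is only a sketch, under the assumption that a shared field stands in for the table and restoring the old value stands in for rollback; the class and method names are made up for illustration:

```java
import java.util.concurrent.CountDownLatch;

// Plain-Java sketch of the timeline above: a "database" value x, a writer
// that writes and then rolls back, and a reader that sees the dirty value.
public class DirtyReadDemo {
    static volatile int x = 2;                  // committed value in the "database"

    public static int run() {
        final int[] seen = new int[1];
        CountDownLatch written = new CountDownLatch(1);
        CountDownLatch read = new CountDownLatch(1);

        Thread writer = new Thread(() -> {
            x = 1;                              // write(x), not yet committed
            written.countDown();
            await(read);
            x = 2;                              // rollback: restore the committed value
        });
        Thread reader = new Thread(() -> {
            await(written);
            seen[0] = x;                        // read(x): observes the uncommitted value
            read.countDown();
        });

        writer.start();
        reader.start();
        join(writer);
        join(reader);
        return seen[0];                         // 1, although the committed value is 2
    }

    private static void await(CountDownLatch l) {
        try { l.await(); } catch (InterruptedException e) { throw new IllegalStateException(e); }
    }
    private static void join(Thread t) {
        try { t.join(); } catch (InterruptedException e) { throw new IllegalStateException(e); }
    }

    public static void main(String[] args) {
        System.out.println("reader saw x=" + run() + ", committed x=" + x);
    }
}
```

The latches force exactly the interleaving in the diagram, so the reader always ends up holding the rolled-back value.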
    

    So a sane default (if such can be claimed) could be Read Committed , which only lets you read values that have already been committed by other running transactions, in combination with a propagation level of Required . Then you can work from there if your application has other needs.


    A practical example where a new transaction will always be created when entering the provideService routine and completed when leaving it:

    public class FooService {
        private Repository repo1;
        private Repository repo2;
    
        @Transactional(propagation=Propagation.REQUIRES_NEW)
        public void provideService() {
            repo1.retrieveFoo();
            repo2.retrieveFoo();
        }
    }
    

    Had we used Required instead, the transaction would remain open if it was already open when entering the routine. Note also that the result of a rollback could be different, as several executions could take part in the same transaction.


    We can easily verify the behaviour with a test and see how the results differ with the propagation levels:

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(locations="classpath:/fooService.xml")
    public class FooServiceTests {

        private @Autowired PlatformTransactionManager transactionManager;
        private @Autowired FooService fooService;

        @Test
        public void testProvideService() {
            TransactionStatus status = transactionManager.getTransaction(new DefaultTransactionDefinition());
            fooService.provideService();
            transactionManager.rollback(status);
            // assert repository values are unchanged ...
        }
    }
    

    With a propagation level of

  • Requires_new we would expect fooService.provideService() was NOT rolled back, since it created its own separate transaction.

  • Required we would expect everything was rolled back and the backing store unchanged.


    PROPAGATION_REQUIRED = 0 ; If a DataSourceTransactionObject T1 is already started for method M1, and transaction support is required for another method M2, no new transaction object is created. The same object T1 is used for M2.

    PROPAGATION_MANDATORY = 2 ; The method must run within a transaction. If no existing transaction is in progress, an exception will be thrown.

    PROPAGATION_REQUIRES_NEW = 3 ; If DataSourceTransactionObject T1 is already started for method M1 and is in progress (executing method M1), and another method M2 starts executing, then T1 is suspended for the duration of method M2 and a new DataSourceTransactionObject T2 is created for M2. M2 runs within its own transaction context.

    PROPAGATION_NOT_SUPPORTED = 4 ; If DataSourceTransactionObject T1 is already started for method M1 and another method M2 is then called, M2 should not run within a transaction context: T1 is suspended until M2 is finished.

    PROPAGATION_NEVER = 5 ; The method must not run within a transaction context; if a transaction is already in progress, an exception is thrown.
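The rules above can be condensed into a small decision function. This is a plain-Java sketch, not Spring code; the `Action` names and the `decide` method are hypothetical and only mirror the list above:

```java
// Hypothetical sketch of the propagation rules: given whether a transaction
// is already active, decide what should happen to the incoming method call.
public class PropagationSketch {
    enum Propagation { REQUIRED, MANDATORY, REQUIRES_NEW, NOT_SUPPORTED, NEVER }
    enum Action { JOIN_EXISTING, CREATE_NEW, SUSPEND_AND_CREATE_NEW,
                  SUSPEND_AND_RUN_WITHOUT, RUN_WITHOUT, THROW }

    static Action decide(Propagation p, boolean txActive) {
        switch (p) {
            case REQUIRED:      return txActive ? Action.JOIN_EXISTING : Action.CREATE_NEW;
            case MANDATORY:     return txActive ? Action.JOIN_EXISTING : Action.THROW;
            case REQUIRES_NEW:  return txActive ? Action.SUSPEND_AND_CREATE_NEW : Action.CREATE_NEW;
            case NOT_SUPPORTED: return txActive ? Action.SUSPEND_AND_RUN_WITHOUT : Action.RUN_WITHOUT;
            case NEVER:         return txActive ? Action.THROW : Action.RUN_WITHOUT;
            default:            throw new IllegalArgumentException("unknown propagation: " + p);
        }
    }

    public static void main(String[] args) {
        // M2 with REQUIRES_NEW while T1 is active: T1 is suspended, T2 created
        System.out.println(decide(Propagation.REQUIRES_NEW, true));
    }
}
```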

    An isolation level is about how much a transaction may be impacted by the activities of other concurrent transactions. It supports consistency, keeping the data across many tables in a consistent state, and involves locking rows and/or tables in the database.

    The problem with multiple transactions:

    Scenario 1. Transaction T1 reads data from table A1 that was written by another concurrent transaction T2. If T2 is then rolled back, the data obtained by T1 is invalid. E.g. a=2 is the original data. If T1 reads a=1 that was written by T2, and T2 rolls back, then a=1 is rolled back to a=2 in the DB. But now T1 holds a=1 while the DB table holds a=2.

    Scenario 2. Transaction T1 reads data from table A1. Another concurrent transaction T2 then updates data in table A1, so the data T1 has read no longer matches table A1. E.g. if T1 read a=1 and T2 then updated it to a=2, a second read by T1 would return a different value than the first.

    Scenario 3. Transaction T1 reads a certain number of rows from table A1. Another concurrent transaction T2 then inserts more rows into table A1, so the number of rows T1 read differs from the rows now in table A1.

    Scenario 1 is called Dirty reads.

    Scenario 2 is called Non-repeatable reads.

    Scenario 3 is called Phantom reads.

    So, the isolation level is the extent to which Scenario 1, Scenario 2 and Scenario 3 can be prevented. You can obtain complete isolation by implementing locking, that is, by preventing concurrent reads and writes to the same data from occurring, but this affects performance. How much isolation is required depends on the application.

    ISOLATION_READ_UNCOMMITTED : Allows reading changes that haven't yet been committed. It suffers from Scenario 1, Scenario 2 and Scenario 3.

    ISOLATION_READ_COMMITTED : Allows reads only of data that concurrent transactions have committed. It may suffer from Scenario 2 and Scenario 3, because other transactions may be updating the data.

    ISOLATION_REPEATABLE_READ : Multiple reads of the same field will yield the same results unless changed by the transaction itself. It may suffer from Scenario 3, because other transactions may be inserting rows.

    ISOLATION_SERIALIZABLE : Scenario 1, Scenario 2 and Scenario 3 never happen. It is complete isolation. It involves full locking and affects performance because of that locking.
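The mapping between the four levels and the three scenarios can be written down as a small lookup. This is a plain-Java sketch using the standard java.sql.Connection isolation constants; the method name anomaliesPossibleAt is made up for illustration:

```java
import java.sql.Connection;
import java.util.EnumSet;

// Which of the three scenarios each JDBC isolation level still permits.
public class IsolationAnomalies {
    enum Anomaly { DIRTY_READ, NON_REPEATABLE_READ, PHANTOM_READ }

    static EnumSet<Anomaly> anomaliesPossibleAt(int jdbcIsolationLevel) {
        switch (jdbcIsolationLevel) {
            case Connection.TRANSACTION_READ_UNCOMMITTED:
                return EnumSet.allOf(Anomaly.class);              // Scenarios 1, 2, 3
            case Connection.TRANSACTION_READ_COMMITTED:
                return EnumSet.of(Anomaly.NON_REPEATABLE_READ,
                                  Anomaly.PHANTOM_READ);          // Scenarios 2, 3
            case Connection.TRANSACTION_REPEATABLE_READ:
                return EnumSet.of(Anomaly.PHANTOM_READ);          // Scenario 3
            case Connection.TRANSACTION_SERIALIZABLE:
                return EnumSet.noneOf(Anomaly.class);             // complete isolation
            default:
                throw new IllegalArgumentException("unknown level: " + jdbcIsolationLevel);
        }
    }

    public static void main(String[] args) {
        System.out.println(anomaliesPossibleAt(Connection.TRANSACTION_READ_COMMITTED));
    }
}
```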

    You can test using

    public class TransactionBehaviour {
        // the data source is set either using xml or annotations
        DataSourceTransactionManager manager = new DataSourceTransactionManager();
        TransactionStatus status;

        public void beginTransaction() {
            DefaultTransactionDefinition def = new DefaultTransactionDefinition();
            // overwrite the defaults PROPAGATION_REQUIRED and ISOLATION_DEFAULT;
            // note that propagation and isolation are properties of the
            // transaction definition, not of the transaction manager
            def.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
            def.setIsolationLevel(TransactionDefinition.ISOLATION_READ_COMMITTED);

            status = manager.getTransaction(def);
        }

        public void commitTransaction() {
            if (!status.isCompleted()) {
                manager.commit(status);
            }
        }

        public void rollbackTransaction() {
            if (!status.isCompleted()) {
                manager.rollback(status);
            }
        }

        public void run() {
            beginTransaction();
            try {
                // M1(); ... do the transactional work here
                commitTransaction();
            } catch (RuntimeException e) {
                rollbackTransaction();
            }
        }
    }
    

    You can debug and see the result with different values for isolation and propagation.


    Enough explanation about each parameter is given by the other answers; however, you asked for a real-world example, so here is one that clarifies the purpose of the different propagation options:

    Suppose you're in charge of implementing a signup service in which a confirmation e-mail is sent to the user. You come up with two service objects, one for enrolling the user and one for sending e-mails, where the latter is called inside the former. For example something like this:

    /* Sign-up service */
    @Service
    @Transactional(propagation = Propagation.REQUIRED)
    class SignUpService {
        ...
        void signUp(User user) {
            ...
            emailService.sendMail(user);
        }
    }

    /* E-mail service */
    @Service
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    class EmailService {
        ...
        void sendMail(User user) {
            try {
                ... // Trying to send the e-mail
            } catch (Exception e) {
                // handle the failure without rolling back the caller
            }
        }
    }
    

    You may have noticed that the second service is of propagation type REQUIRES_NEW, and moreover chances are it throws an exception (SMTP server down, invalid e-mail, or other reasons). You probably don't want the whole process to roll back, e.g. removing the user information from the database; therefore you call the second service in a separate transaction.

    Back to our example, this time you are concerned about the database security, so you define your DAO classes this way:

    /* User DAO */
    @Transactional(propagation = Propagation.MANDATORY)
    class UserDAO {
        // some CRUD methods
    }
    

    Meaning that whenever a DAO method, and hence a potential access to the db, is invoked, we need to ensure that the call was made from inside one of our services, implying that a live transaction should exist; otherwise an exception occurs. Therefore the propagation is of type MANDATORY .
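What MANDATORY enforces can be emulated in a few lines of plain Java (no Spring). Everything below is hypothetical and purely illustrates the "fail if no live transaction" check, with a thread-local flag standing in for Spring's transaction context:

```java
// Hypothetical sketch of the MANDATORY check: a thread-local flag stands in
// for "a transaction is active", and the DAO refuses to run without it.
public class MandatorySketch {
    static final ThreadLocal<Boolean> txActive = ThreadLocal.withInitial(() -> false);

    static void requireTransaction() {
        if (!txActive.get()) {
            throw new IllegalStateException("no existing transaction for propagation 'mandatory'");
        }
    }

    // DAO-style method: only legal inside a transaction
    static String findUser(String id) {
        requireTransaction();
        return "user-" + id;
    }

    // Service-style method: opens the "transaction" and calls the DAO
    static String service(String id) {
        txActive.set(true);               // begin
        try {
            return findUser(id);
        } finally {
            txActive.set(false);          // commit/rollback
        }
    }

    public static void main(String[] args) {
        System.out.println(service("42"));   // works: called inside a transaction
        try {
            findUser("42");                  // fails: no live transaction
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Calling the DAO method through the service succeeds, while calling it directly is rejected, which is exactly the guarantee MANDATORY gives you.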
