c20230537's blog

By c20230537, history, 3 days ago, In English

This problem can be solved with square-root complexity, i.e. $$$O(n \sqrt n)$$$. But someone got it accepted by memoizing with unordered_map. Actually, we can build $$$\log n$$$ chains with $$$\frac{n}{\log n}$$$ vertices per chain and query the bottoms of the chains in pairs each time, so that the space and time complexity becomes $$$\frac{n^2}{\log n}$$$. It shouldn't get accepted.
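For concreteness, here is a minimal sketch of a generator for that construction, so the shape of the test is clear. The input format (first $$$n$$$, then $$$n-1$$$ tree edges, then $$$q$$$ and the query pairs) is only my assumption for illustration and may not match the actual problem:

```cpp
// Illustrative generator: builds ~log2(n) chains (bamboos) of roughly
// n / log2(n) vertices each, all hanging from vertex 1, then asks q
// queries on pairs of chain bottoms. The exact input format is assumed.
#include <bits/stdc++.h>
using namespace std;

int main() {
    const int n = 100000, q = 100000;            // limits from the thread
    const int k = (int)ceil(log2((double)n));    // ~17 chains for n = 1e5
    mt19937 rng(12345);

    printf("%d\n", n);
    vector<int> bottom;                          // deepest vertex of each chain
    int nxt = 2;                                 // vertex 1 is the common root
    for (int c = 0; c < k; ++c) {
        int len = (n - 1) / k + (c < (n - 1) % k ? 1 : 0); // ~n/log n vertices
        int prev = 1;
        for (int i = 0; i < len; ++i) {
            printf("%d %d\n", prev, nxt);        // extend the current chain
            prev = nxt++;
        }
        bottom.push_back(prev);
    }
    printf("%d\n", q);
    for (int i = 0; i < q; ++i) {                // query pairs of chain bottoms
        int a = bottom[rng() % k], b = bottom[rng() % k];
        printf("%d %d\n", a, b);
    }
    return 0;
}
```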

However, this code with unordered_map got accepted. At the very least, we shouldn't let the brute force get accepted, right?


»
3 days ago, # |

Because a 3-second TL is just too much for an $$$O(n \sqrt n)$$$ solution with $$$n \le 100000$$$.

It would be much better if the TL were 1.5 seconds, for example.

»
3 days ago, # |

If there are $$$k$$$ chains of length $$$\frac{n}{k}$$$, the overall complexity would be $$$O\left(\min\left(q, \frac{k(k-1)}{2}\right) \cdot \frac{n}{k}\right)$$$. Its maximum is $$$O(n \sqrt n)$$$.
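To spell out where that maximum comes from (assuming $$$q = \Theta(n)$$$, as in this problem): for small $$$k$$$ the bound is about $$$\frac{k(k-1)}{2} \cdot \frac{n}{k} \approx \frac{kn}{2}$$$, which grows with $$$k$$$, while once $$$\frac{k(k-1)}{2} \ge q$$$ it becomes $$$\frac{qn}{k}$$$, which shrinks with $$$k$$$. So the worst case is near $$$k \approx \sqrt{2q}$$$, where the bound is roughly

$$$q \cdot \frac{n}{\sqrt{2q}} = n\sqrt{\frac{q}{2}} = O(n \sqrt n).$$$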

My solution with memoization passed in 1 s. I could probably remove the hashmap to make it run even faster.

  • »
    »
    3 days ago, # ^ |

    We can choose approximately $$$17$$$ bamboos of length $$$5882$$$ each. With $$$q = 10^5$$$, it can be $$$17 \times 5882 \times \frac{10^5}{17} = 5.882 \times 10^8$$$. Why is it $$$O(n \sqrt n)$$$? Can you please explain that?

    • »
      »
      »
      3 days ago, # ^ |

      I don't understand your formula. Why do you multiply it by 5882?

      • »
        »
        »
        »
        3 days ago, # ^ |

        We can select the bottoms of the bamboos so that each query asks about a different pair. That means you have to record values along the entire bamboo length of $$$5882$$$, right?

        • »
          »
          »
          »
          »
          3 days ago, # ^ |

          Yes, but there are only $$$17^2 = 289$$$ pairs of bottom vertices we need to consider. Each pair has $$$5882$$$ different depths, so it's only about $$$1.7 \times 10^6$$$ values.

          • »
            »
            »
            »
            »
            »
            3 days ago, # ^ |

            I see. But the $$$O(n \sqrt n)$$$ approach with a map becomes $$$O(n \sqrt n \log(n \sqrt n))$$$, and if we use unordered_map, won't we probably get WA or TLE?

            • »
              »
              »
              »
              »
              »
              »
              3 days ago, # ^ |

              Using unordered_map wouldn't give us WA; however, TLE would be possible.

              In my submission I used gp_hash_table with a custom hash function, which was much faster than map. But that's not the only optimization: I was getting MLE for some time, so I memoized the answers only for depths that are multiples of 10. That did the trick.
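              A rough sketch of that kind of setup, to make it concrete. The splitmix64-style hash, the key packing, and the empty recursion body below are illustrative assumptions, not the real submission; only the gp_hash_table + custom hash + "store only every 10th depth" idea is from the text above:

              ```cpp
              // Illustrative sketch: gp_hash_table keyed by a packed (u, v, depth)
              // state, with a splitmix64-style hash so the default hash can't be
              // attacked. Only states whose depth is a multiple of 10 are stored,
              // cutting memory roughly 10x at the cost of some recomputation.
              #include <bits/stdc++.h>
              #include <ext/pb_ds/assoc_container.hpp>
              using namespace std;
              using namespace __gnu_pbds;

              struct custom_hash {
                  size_t operator()(uint64_t x) const {
                      x += 0x9e3779b97f4a7c15ULL;                   // splitmix64 mix
                      x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
                      x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
                      return x ^ (x >> 31);
                  }
              };

              gp_hash_table<uint64_t, long long, custom_hash> memo;

              // Hypothetical memoized computation for a pair of vertices at a depth;
              // the transition itself depends on the problem and is omitted here.
              long long solve(int u, int v, int depth) {
                  // u, v, depth each fit in 20 bits for n, q up to 1e5
                  uint64_t key = ((uint64_t)u << 40) | ((uint64_t)v << 20) | (uint64_t)depth;
                  bool store = (depth % 10 == 0);        // memoize every 10th depth only
                  if (store) {
                      auto it = memo.find(key);
                      if (it != memo.end()) return it->second;
                  }
                  long long res = 0;
                  // ... problem-specific transition, possibly recursing on depth - 1 ...
                  if (store) memo[key] = res;
                  return res;
              }
              ```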

»
3 days ago, # |

2994 ms. It's too close...