Java Concurrency Package: Thread Pool Internals
Published: 2019-06-28


Please credit the original source when reposting:

 

Thread pool example

Before analyzing how thread pools work, let's look at a simple thread pool example.

import java.util.concurrent.Executors;
import java.util.concurrent.ExecutorService;

public class ThreadPoolDemo1 {

    public static void main(String[] args) {
        // Create a thread pool that reuses a fixed number of threads
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Create objects that implement Runnable; Thread itself also implements Runnable
        Thread ta = new MyThread();
        Thread tb = new MyThread();
        Thread tc = new MyThread();
        Thread td = new MyThread();
        Thread te = new MyThread();
        // Submit the tasks to the pool for execution
        pool.execute(ta);
        pool.execute(tb);
        pool.execute(tc);
        pool.execute(td);
        pool.execute(te);
        // Shut down the thread pool
        pool.shutdown();
    }
}

class MyThread extends Thread {

    @Override
    public void run() {
        System.out.println(Thread.currentThread().getName() + " is running.");
    }
}

Output

pool-1-thread-1 is running.
pool-1-thread-2 is running.
pool-1-thread-1 is running.
pool-1-thread-2 is running.
pool-1-thread-1 is running.

The example covers the three main steps: creating the thread pool, submitting tasks to it, and shutting it down. Later we will analyze ThreadPoolExecutor from these three angles.
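For reference, here is a minimal sketch (not part of the original example; the class name DirectPoolDemo is made up for illustration) of what Executors.newFixedThreadPool(2) amounts to: it constructs a ThreadPoolExecutor directly with the same arguments the factory method uses in the JDK source quoted below, then performs the same submit-and-shutdown steps.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DirectPoolDemo {

    public static void main(String[] args) {
        // Equivalent to Executors.newFixedThreadPool(2):
        // corePoolSize == maximumPoolSize == 2, idle threads never time out,
        // and waiting tasks sit in an unbounded LinkedBlockingQueue.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2,
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());

        for (int i = 0; i < 5; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    System.out.println(Thread.currentThread().getName() + " is running.");
                }
            });
        }
        pool.shutdown();
    }
}

With corePoolSize equal to maximumPoolSize and an unbounded queue, the pool never grows beyond two threads; the remaining tasks simply wait in the queue, which matches the output above.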

 

Reference code (based on JDK 1.7.0_40)

Executors full source code

/*
 * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
 */

/*
 * Written by Doug Lea with assistance from members of JCP JSR-166
 * Expert Group and released to the public domain, as explained at
 * http://creativecommons.org/publicdomain/zero/1.0/
 */

package java.util.concurrent;
import java.util.*;
import java.util.concurrent.atomic.AtomicInteger;
import java.security.AccessControlContext;
import java.security.AccessController;
import java.security.PrivilegedAction;
import java.security.PrivilegedExceptionAction;
import java.security.PrivilegedActionException;
import java.security.AccessControlException;
import sun.security.util.SecurityConstants;

/**
 * Factory and utility methods for {@link Executor}, {@link ExecutorService},
 * {@link ScheduledExecutorService}, {@link ThreadFactory}, and {@link Callable}
 * classes defined in this package: methods that create preconfigured
 * ExecutorService and ScheduledExecutorService instances, "wrapped" executors
 * that disable reconfiguration, ThreadFactory instances that set newly created
 * threads to a known state, and Callable adapters for other closure-like forms.
 *
 * @since 1.5
 * @author Doug Lea
 */
public class Executors {

    /**
     * Creates a thread pool that reuses a fixed number of threads
     * operating off a shared unbounded queue.
     */
    public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }

    /**
     * Creates a fixed thread pool that uses the provided ThreadFactory
     * to create new threads when needed.
     */
    public static ExecutorService newFixedThreadPool(int nThreads, ThreadFactory threadFactory) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>(),
                                      threadFactory);
    }

    /**
     * Creates an Executor that uses a single worker thread operating off an
     * unbounded queue. Unlike the otherwise equivalent newFixedThreadPool(1),
     * the returned executor is guaranteed not to be reconfigurable to use
     * additional threads.
     */
    public static ExecutorService newSingleThreadExecutor() {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>()));
    }

    /**
     * Single-threaded Executor that uses the provided ThreadFactory to
     * create a new thread when needed.
     */
    public static ExecutorService newSingleThreadExecutor(ThreadFactory threadFactory) {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>(),
                                    threadFactory));
    }

    /**
     * Creates a thread pool that creates new threads as needed, but reuses
     * previously constructed threads when available. Threads that have not
     * been used for sixty seconds are terminated and removed from the cache.
     */
    public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }

    /**
     * Cached thread pool that uses the provided ThreadFactory to create
     * new threads when needed.
     */
    public static ExecutorService newCachedThreadPool(ThreadFactory threadFactory) {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>(),
                                      threadFactory);
    }

    /**
     * Creates a single-threaded executor that can schedule commands to run
     * after a given delay, or to execute periodically. Unlike the otherwise
     * equivalent newScheduledThreadPool(1), the returned executor is
     * guaranteed not to be reconfigurable to use additional threads.
     */
    public static ScheduledExecutorService newSingleThreadScheduledExecutor() {
        return new DelegatedScheduledExecutorService
            (new ScheduledThreadPoolExecutor(1));
    }

    public static ScheduledExecutorService newSingleThreadScheduledExecutor(ThreadFactory threadFactory) {
        return new DelegatedScheduledExecutorService
            (new ScheduledThreadPoolExecutor(1, threadFactory));
    }

    /**
     * Creates a thread pool that can schedule commands to run after a
     * given delay, or to execute periodically.
     */
    public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) {
        return new ScheduledThreadPoolExecutor(corePoolSize);
    }

    public static ScheduledExecutorService newScheduledThreadPool(
            int corePoolSize, ThreadFactory threadFactory) {
        return new ScheduledThreadPoolExecutor(corePoolSize, threadFactory);
    }

    /**
     * Returns an object that delegates all defined ExecutorService methods
     * to the given executor, but not any other methods that might otherwise
     * be accessible using casts. This provides a way to safely "freeze"
     * configuration of a given concrete implementation.
     */
    public static ExecutorService unconfigurableExecutorService(ExecutorService executor) {
        if (executor == null)
            throw new NullPointerException();
        return new DelegatedExecutorService(executor);
    }

    /**
     * Same as unconfigurableExecutorService, for ScheduledExecutorService.
     */
    public static ScheduledExecutorService unconfigurableScheduledExecutorService(ScheduledExecutorService executor) {
        if (executor == null)
            throw new NullPointerException();
        return new DelegatedScheduledExecutorService(executor);
    }

    /**
     * Returns the default thread factory used to create new threads.
     * New threads are non-daemon, have priority NORM_PRIORITY (or the maximum
     * permitted in the thread group, if smaller), and are named
     * pool-N-thread-M, where N is the sequence number of this factory and
     * M is the sequence number of the thread created by this factory.
     */
    public static ThreadFactory defaultThreadFactory() {
        return new DefaultThreadFactory();
    }

    /**
     * Returns a thread factory used to create new threads that have the
     * same permissions as the current thread: same settings as
     * defaultThreadFactory, plus the AccessControlContext and
     * contextClassLoader of the thread invoking this method.
     */
    public static ThreadFactory privilegedThreadFactory() {
        return new PrivilegedThreadFactory();
    }

    /**
     * Returns a Callable that, when called, runs the given task and
     * returns the given result.
     */
    public static <T> Callable<T> callable(Runnable task, T result) {
        if (task == null)
            throw new NullPointerException();
        return new RunnableAdapter<T>(task, result);
    }

    /**
     * Returns a Callable that, when called, runs the given task and returns null.
     */
    public static Callable<Object> callable(Runnable task) {
        if (task == null)
            throw new NullPointerException();
        return new RunnableAdapter<Object>(task, null);
    }

    /**
     * Returns a Callable that, when called, runs the given privileged
     * action and returns its result.
     */
    public static Callable<Object> callable(final PrivilegedAction<?> action) {
        if (action == null)
            throw new NullPointerException();
        return new Callable<Object>() {
            public Object call() { return action.run(); }};
    }

    /**
     * Returns a Callable that, when called, runs the given privileged
     * exception action and returns its result.
     */
    public static Callable<Object> callable(final PrivilegedExceptionAction<?> action) {
        if (action == null)
            throw new NullPointerException();
        return new Callable<Object>() {
            public Object call() throws Exception { return action.run(); }};
    }

    /**
     * Returns a Callable that will, when called, execute the given callable
     * under the current access control context.
     */
    public static <T> Callable<T> privilegedCallable(Callable<T> callable) {
        if (callable == null)
            throw new NullPointerException();
        return new PrivilegedCallable<T>(callable);
    }

    /**
     * Like privilegedCallable, but also uses the current context class
     * loader as the context class loader of the executing callable.
     */
    public static <T> Callable<T> privilegedCallableUsingCurrentClassLoader(Callable<T> callable) {
        if (callable == null)
            throw new NullPointerException();
        return new PrivilegedCallableUsingCurrentClassLoader<T>(callable);
    }

    // Non-public classes supporting the public methods

    /** A callable that runs given task and returns given result. */
    static final class RunnableAdapter<T> implements Callable<T> {
        final Runnable task;
        final T result;
        RunnableAdapter(Runnable task, T result) {
            this.task = task;
            this.result = result;
        }
        public T call() {
            task.run();
            return result;
        }
    }

    /** A callable that runs under established access control settings. */
    static final class PrivilegedCallable<T> implements Callable<T> {
        private final Callable<T> task;
        private final AccessControlContext acc;

        PrivilegedCallable(Callable<T> task) {
            this.task = task;
            this.acc = AccessController.getContext();
        }

        public T call() throws Exception {
            try {
                return AccessController.doPrivileged(
                    new PrivilegedExceptionAction<T>() {
                        public T run() throws Exception {
                            return task.call();
                        }
                    }, acc);
            } catch (PrivilegedActionException e) {
                throw e.getException();
            }
        }
    }

    /**
     * A callable that runs under established access control settings and
     * current ClassLoader.
     */
    static final class PrivilegedCallableUsingCurrentClassLoader<T> implements Callable<T> {
        private final Callable<T> task;
        private final AccessControlContext acc;
        private final ClassLoader ccl;

        PrivilegedCallableUsingCurrentClassLoader(Callable<T> task) {
            SecurityManager sm = System.getSecurityManager();
            if (sm != null) {
                // Calls to getContextClassLoader from this class
                // never trigger a security check, but we check
                // whether our callers have this permission anyways.
                sm.checkPermission(SecurityConstants.GET_CLASSLOADER_PERMISSION);

                // Whether setContextClassLoader turns out to be necessary
                // or not, we fail fast if permission is not available.
                sm.checkPermission(new RuntimePermission("setContextClassLoader"));
            }
            this.task = task;
            this.acc = AccessController.getContext();
            this.ccl = Thread.currentThread().getContextClassLoader();
        }

        public T call() throws Exception {
            try {
                return AccessController.doPrivileged(
                    new PrivilegedExceptionAction<T>() {
                        public T run() throws Exception {
                            Thread t = Thread.currentThread();
                            ClassLoader cl = t.getContextClassLoader();
                            if (ccl == cl) {
                                return task.call();
                            } else {
                                t.setContextClassLoader(ccl);
                                try {
                                    return task.call();
                                } finally {
                                    t.setContextClassLoader(cl);
                                }
                            }
                        }
                    }, acc);
            } catch (PrivilegedActionException e) {
                throw e.getException();
            }
        }
    }

    /** The default thread factory. */
    static class DefaultThreadFactory implements ThreadFactory {
        private static final AtomicInteger poolNumber = new AtomicInteger(1);
        private final ThreadGroup group;
        private final AtomicInteger threadNumber = new AtomicInteger(1);
        private final String namePrefix;

        DefaultThreadFactory() {
            SecurityManager s = System.getSecurityManager();
            group = (s != null) ? s.getThreadGroup() :
                                  Thread.currentThread().getThreadGroup();
            namePrefix = "pool-" +
                          poolNumber.getAndIncrement() +
                         "-thread-";
        }

        public Thread newThread(Runnable r) {
            Thread t = new Thread(group, r,
                                  namePrefix + threadNumber.getAndIncrement(),
                                  0);
            if (t.isDaemon())
                t.setDaemon(false);
            if (t.getPriority() != Thread.NORM_PRIORITY)
                t.setPriority(Thread.NORM_PRIORITY);
            return t;
        }
    }

    /** Thread factory capturing access control context and class loader. */
    static class PrivilegedThreadFactory extends DefaultThreadFactory {
        private final AccessControlContext acc;
        private final ClassLoader ccl;

        PrivilegedThreadFactory() {
            super();
            SecurityManager sm = System.getSecurityManager();
            if (sm != null) {
                // Calls to getContextClassLoader from this class
                // never trigger a security check, but we check
                // whether our callers have this permission anyways.
                sm.checkPermission(SecurityConstants.GET_CLASSLOADER_PERMISSION);

                // Fail fast
                sm.checkPermission(new RuntimePermission("setContextClassLoader"));
            }
            this.acc = AccessController.getContext();
            this.ccl = Thread.currentThread().getContextClassLoader();
        }

        public Thread newThread(final Runnable r) {
            return super.newThread(new Runnable() {
                public void run() {
                    AccessController.doPrivileged(new PrivilegedAction<Void>() {
                        public Void run() {
                            Thread.currentThread().setContextClassLoader(ccl);
                            r.run();
                            return null;
                        }
                    }, acc);
                }
            });
        }
    }

    /**
     * A wrapper class that exposes only the ExecutorService methods
     * of an ExecutorService implementation.
     */
    static class DelegatedExecutorService extends AbstractExecutorService {
        private final ExecutorService e;
        DelegatedExecutorService(ExecutorService executor) { e = executor; }
        public void execute(Runnable command) { e.execute(command); }
        public void shutdown() { e.shutdown(); }
        public List<Runnable> shutdownNow() { return e.shutdownNow(); }
        public boolean isShutdown() { return e.isShutdown(); }
        public boolean isTerminated() { return e.isTerminated(); }
        public boolean awaitTermination(long timeout, TimeUnit unit)
            throws InterruptedException {
            return e.awaitTermination(timeout, unit);
        }
        public Future<?> submit(Runnable task) {
            return e.submit(task);
        }
        public <T> Future<T> submit(Callable<T> task) {
            return e.submit(task);
        }
        public <T> Future<T> submit(Runnable task, T result) {
            return e.submit(task, result);
        }
        public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks)
            throws InterruptedException {
            return e.invokeAll(tasks);
        }
        public <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks,
                                             long timeout, TimeUnit unit)
            throws InterruptedException {
            return e.invokeAll(tasks, timeout, unit);
        }
        public <T> T invokeAny(Collection<? extends Callable<T>> tasks)
            throws InterruptedException, ExecutionException {
            return e.invokeAny(tasks);
        }
        public <T> T invokeAny(Collection<? extends Callable<T>> tasks,
                               long timeout, TimeUnit unit)
            throws InterruptedException, ExecutionException, TimeoutException {
            return e.invokeAny(tasks, timeout, unit);
        }
    }

    static class FinalizableDelegatedExecutorService
            extends DelegatedExecutorService {
        FinalizableDelegatedExecutorService(ExecutorService executor) {
            super(executor);
        }
        protected void finalize() {
            super.shutdown();
        }
    }

    /**
     * A wrapper class that exposes only the ScheduledExecutorService
     * methods of a ScheduledExecutorService implementation.
     */
    static class DelegatedScheduledExecutorService
            extends DelegatedExecutorService
            implements ScheduledExecutorService {
        private final ScheduledExecutorService e;
        DelegatedScheduledExecutorService(ScheduledExecutorService executor) {
            super(executor);
            e = executor;
        }
        public ScheduledFuture<?> schedule(Runnable command, long delay, TimeUnit unit) {
            return e.schedule(command, delay, unit);
        }
        public <V> ScheduledFuture<V> schedule(Callable<V> callable, long delay, TimeUnit unit) {
            return e.schedule(callable, delay, unit);
        }
        public ScheduledFuture<?> scheduleAtFixedRate(Runnable command, long initialDelay, long period, TimeUnit unit) {
            return e.scheduleAtFixedRate(command, initialDelay, period, unit);
        }
        public ScheduledFuture<?> scheduleWithFixedDelay(Runnable command, long initialDelay, long delay, TimeUnit unit) {
            return e.scheduleWithFixedDelay(command, initialDelay, delay, unit);
        }
    }

    /** Cannot instantiate. */
    private Executors() {}
}
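To tie the factory and adapter methods above together, here is a small usage sketch. It is not part of the quoted JDK code; the class name ExecutorsUsageDemo, the "worker-" name prefix, and the "done" result value are made up for illustration. It builds a fixed pool with a custom ThreadFactory (the same idea as DefaultThreadFactory above) and uses Executors.callable to adapt a Runnable into a Callable.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ExecutorsUsageDemo {

    public static void main(String[] args) throws Exception {
        // A custom ThreadFactory: same idea as DefaultThreadFactory above,
        // but with our own name prefix ("worker-N") for easier log reading.
        ThreadFactory factory = new ThreadFactory() {
            private final AtomicInteger n = new AtomicInteger(1);
            public Thread newThread(Runnable r) {
                Thread t = new Thread(r, "worker-" + n.getAndIncrement());
                t.setDaemon(false);
                return t;
            }
        };

        ExecutorService pool = Executors.newFixedThreadPool(2, factory);

        // Executors.callable adapts a Runnable plus a fixed result into a Callable.
        Callable<String> task = Executors.callable(new Runnable() {
            public void run() {
                System.out.println(Thread.currentThread().getName() + " is running.");
            }
        }, "done");

        Future<String> f = pool.submit(task);
        System.out.println("result = " + f.get());

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}

Calling shutdown() and then awaitTermination gives any queued tasks a chance to finish before the program exits.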

 

ThreadPoolExecutor full source code

/*
 * ORACLE PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
 */

/*
 * Written by Doug Lea with assistance from members of JCP JSR-166
 * Expert Group and released to the public domain, as explained at
 * http://creativecommons.org/publicdomain/zero/1.0/
 */

package java.util.concurrent;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.*;

/**
 * An {@link ExecutorService} that executes each submitted task using
 * one of possibly several pooled threads, normally configured
 * using {@link Executors} factory methods.
 *
 * Thread pools address two different problems: they usually provide
 * improved performance when executing large numbers of asynchronous
 * tasks, due to reduced per-task invocation overhead, and they provide
 * a means of bounding and managing the resources, including threads,
 * consumed when executing a collection of tasks. Each
 * {@code ThreadPoolExecutor} also maintains some basic statistics,
 * such as the number of completed tasks.
 *
 * Programmers are urged to use the more convenient {@link Executors}
 * factory methods newCachedThreadPool (unbounded thread pool with
 * automatic thread reclamation), newFixedThreadPool (fixed size thread
 * pool) and newSingleThreadExecutor (single background thread), which
 * preconfigure settings for the most common usage scenarios. Otherwise,
 * use the following guide when manually configuring and tuning this class:
 *
 * Core and maximum pool sizes. The pool size is automatically adjusted
 * according to the bounds set by corePoolSize and maximumPoolSize. When
 * a new task is submitted in execute and fewer than corePoolSize threads
 * are running, a new thread is created to handle the request, even if
 * other worker threads are idle. If there are more than corePoolSize but
 * less than maximumPoolSize threads running, a new thread is created
 * only if the queue is full.
 *
 * On-demand construction. By default, even core threads are created and
 * started only when new tasks arrive; this can be overridden with
 * prestartCoreThread or prestartAllCoreThreads.
 *
 * Creating new threads. New threads are created using a
 * {@link ThreadFactory}. If not otherwise specified,
 * Executors#defaultThreadFactory is used, which creates threads in the
 * same ThreadGroup, with NORM_PRIORITY and non-daemon status.
 *
 * Keep-alive times. If the pool currently has more than corePoolSize
 * threads, excess threads are terminated after being idle for more than
 * keepAliveTime. allowCoreThreadTimeOut(boolean) applies the same
 * time-out policy to core threads as well.
 *
 * Queuing. Any {@link BlockingQueue} may be used to transfer and hold
 * submitted tasks. The queue interacts with pool sizing: if fewer than
 * corePoolSize threads are running, the Executor always prefers adding
 * a new thread rather than queuing; if corePoolSize or more threads are
 * running, it prefers queuing; if a request cannot be queued, a new
 * thread is created unless that would exceed maximumPoolSize, in which
 * case the task is rejected. The three general strategies are direct
 * handoffs (SynchronousQueue), unbounded queues (e.g. a
 * LinkedBlockingQueue without a capacity, so at most corePoolSize
 * threads are ever created), and bounded queues (e.g. an
 * ArrayBlockingQueue, which helps prevent resource exhaustion but is
 * harder to tune).
 *
 * Rejected tasks. New tasks submitted in execute are rejected when the
 * Executor has been shut down, or when it uses finite bounds for both
 * maximum threads and work queue capacity and is saturated. In either
 * case, execute invokes the RejectedExecutionHandler. Four predefined
 * handler policies are provided: AbortPolicy (the default, throws
 * RejectedExecutionException), CallerRunsPolicy (the thread that invokes
 * execute runs the task itself), DiscardPolicy (the task is simply
 * dropped), and DiscardOldestPolicy (the task at the head of the work
 * queue is dropped and execution retried).
 *
 * Hook methods. The protected overridable beforeExecute and afterExecute
 * methods are called before and after execution of each task, and
 * terminated can be overridden to perform special processing once the
 * Executor has fully terminated.
 *
 * Queue maintenance. getQueue allows access to the work queue for
 * monitoring and debugging; remove and purge assist in storage
 * reclamation when large numbers of queued tasks become cancelled.
 *
 * Finalization. A pool that is no longer referenced in a program AND has
 * no remaining threads will be shutdown automatically.
 *
 * Extension example. Most extensions of this class override one or more
 * of the protected hook methods. For example, here is a subclass that
 * adds a simple pause/resume feature:
 *
 * <pre> {@code
 * class PausableThreadPoolExecutor extends ThreadPoolExecutor {
 *   private boolean isPaused;
 *   private ReentrantLock pauseLock = new ReentrantLock();
 *   private Condition unpaused = pauseLock.newCondition();
 *
 *   public PausableThreadPoolExecutor(...) { super(...); }
 *
 *   protected void beforeExecute(Thread t, Runnable r) {
 *     super.beforeExecute(t, r);
 *     pauseLock.lock();
 *     try {
 *       while (isPaused) unpaused.await();
 *     } catch (InterruptedException ie) {
 *       t.interrupt();
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 *
 *   public void pause() {
 *     pauseLock.lock();
 *     try {
 *       isPaused = true;
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 *
 *   public void resume() {
 *     pauseLock.lock();
 *     try {
 *       isPaused = false;
 *       unpaused.signalAll();
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 * }}</pre>
 *
 * @since 1.5
 * @author Doug Lea
 */
public class ThreadPoolExecutor extends AbstractExecutorService {
    /**
     * The main pool control state, ctl, is an atomic integer packing
     * two conceptual fields:
     *   workerCount, indicating the effective number of threads
     *   runState,    indicating whether running, shutting down etc
     *
     * To pack them into one int, workerCount is limited to (2^29)-1
     * threads. The runState provides the main lifecycle control:
     *   RUNNING:    accept new tasks and process queued tasks
     *   SHUTDOWN:   don't accept new tasks, but process queued tasks
     *   STOP:       don't accept new tasks, don't process queued tasks,
     *               and interrupt in-progress tasks
     *   TIDYING:    all tasks have terminated, workerCount is zero; the
     *               thread transitioning to TIDYING runs terminated()
     *   TERMINATED: terminated() has completed
     *
     * The numerical order among these values matters, to allow ordered
     * comparisons. The runState monotonically increases over time, but
     * need not hit each state. The transitions are:
     *   RUNNING -> SHUTDOWN            on invocation of shutdown(),
     *                                  perhaps implicitly in finalize()
     *   (RUNNING or SHUTDOWN) -> STOP  on invocation of shutdownNow()
     *   SHUTDOWN -> TIDYING            when both queue and pool are empty
     *   STOP -> TIDYING                when pool is empty
     *   TIDYING -> TERMINATED          when the terminated() hook completes
     * Threads waiting in awaitTermination() will return when the state
     * reaches TERMINATED.
     */
    private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
    private static final int COUNT_BITS = Integer.SIZE - 3;
    private static final int CAPACITY   = (1 << COUNT_BITS) - 1;

    // runState is stored in the high-order bits
    private static final int RUNNING    = -1 << COUNT_BITS;
    private static final int SHUTDOWN   =  0 << COUNT_BITS;
    private static final int STOP       =  1 << COUNT_BITS;
    private static final int TIDYING    =  2 << COUNT_BITS;
    private static final int TERMINATED =  3 << COUNT_BITS;

    // Packing and unpacking ctl
    private static int runStateOf(int c)     { return c & ~CAPACITY; }
    private static int workerCountOf(int c)  { return c & CAPACITY; }
    private static int ctlOf(int rs, int wc) { return rs | wc; }

    /*
     * Bit field accessors that don't require unpacking ctl.
     * These depend on the bit layout and on workerCount being never negative.
     */

    private static boolean runStateLessThan(int c, int s) {
        return c < s;
    }

    private static boolean runStateAtLeast(int c, int s) {
        return c >= s;
    }

    private static boolean isRunning(int c) {
        return c < SHUTDOWN;
    }

    /** Attempt to CAS-increment the workerCount field of ctl. */
    private boolean compareAndIncrementWorkerCount(int expect) {
        return ctl.compareAndSet(expect, expect + 1);
    }

    /** Attempt to CAS-decrement the workerCount field of ctl. */
    private boolean compareAndDecrementWorkerCount(int expect) {
        return ctl.compareAndSet(expect, expect - 1);
    }

    /**
     * Decrements the workerCount field of ctl. This is called only on
     * abrupt termination of a thread (see processWorkerExit). Other
     * decrements are performed within getTask.
     */
    private void decrementWorkerCount() {
        do {} while (! compareAndDecrementWorkerCount(ctl.get()));
    }

    /**
     * The queue used for holding tasks and handing off to worker threads.
     * We rely solely on isEmpty (not on poll() returning null) to see if
     * the queue is empty, which accommodates special-purpose queues such
     * as DelayQueues.
     */
    private final BlockingQueue<Runnable> workQueue;

    /**
     * Lock held on access to workers set and related bookkeeping.
     * Serializing interruptIdleWorkers avoids unnecessary interrupt
     * storms, especially during shutdown.
     */
    private final ReentrantLock mainLock = new ReentrantLock();

    /** Set containing all worker threads in pool. Accessed only when holding mainLock. */
    private final HashSet<Worker> workers = new HashSet<Worker>();

    /** Wait condition to support awaitTermination. */
    private final Condition termination = mainLock.newCondition();

    /** Tracks largest attained pool size. Accessed only under mainLock. */
    private int largestPoolSize;

    /**
     * Counter for completed tasks. Updated only on termination of
     * worker threads. Accessed only under mainLock.
     */
    private long completedTaskCount;

    /*
     * All user control parameters are declared as volatiles so that
     * ongoing actions are based on freshest values, without locking.
     */

    /**
     * Factory for new threads. All threads are created using this factory
     * (via method addWorker). All callers must be prepared for addWorker
     * to fail, for example due to a policy limiting the number of threads
     * or an OutOfMemoryError while allocating a native stack.
     */
    private volatile ThreadFactory threadFactory;

    /** Handler called when saturated or shutdown in execute. */
    private volatile RejectedExecutionHandler handler;

    /**
     * Timeout in nanoseconds for idle threads waiting for work.
     * Threads use this timeout when there are more than corePoolSize
     * present or if allowCoreThreadTimeOut. Otherwise they wait forever
     * for new work.
     */
    private volatile long keepAliveTime;

    /**
     * If false (default), core threads stay alive even when idle.
     * If true, core threads use keepAliveTime to time out waiting for work.
     */
    private volatile boolean allowCoreThreadTimeOut;

    /**
     * Core pool size is the minimum number of workers to keep alive
     * (and not allow to time out etc) unless allowCoreThreadTimeOut
     * is set, in which case the minimum is zero.
     */
    private volatile int corePoolSize;

    /** Maximum pool size. The actual maximum is internally bounded by CAPACITY. */
    private volatile int maximumPoolSize;

    /** The default rejected execution handler. */
    private static final RejectedExecutionHandler defaultHandler =
        new AbortPolicy();

    /**
     * Permission required for callers of shutdown and shutdownNow.
     * Callers must additionally have permission to interrupt each worker
     * thread (see checkShutdownAccess); actual invocations of
     * Thread.interrupt ignore SecurityExceptions, so attempted interrupts
     * silently fail in that case.
     */
    private static final RuntimePermission shutdownPerm =
        new RuntimePermission("modifyThread");

    /**
     * Class Worker mainly maintains interrupt control state for threads
     * running tasks, along with other minor bookkeeping. It
     * opportunistically extends AbstractQueuedSynchronizer to implement a
     * simple non-reentrant lock surrounding each task execution, which
     * protects against interrupts that are intended to wake up a worker
     * waiting for a task from instead interrupting a task being run. Lock
     * state is initialized to -1 to suppress interrupts until the thread
     * actually starts running tasks (in runWorker).
     */
    private final class Worker
        extends AbstractQueuedSynchronizer
        implements Runnable
    {
        /**
         * This class will never be serialized, but we provide a
         * serialVersionUID to suppress a javac warning.
         */
        private static final long serialVersionUID = 6138294804551838833L;

        /** Thread this worker is running in. Null if factory fails. */
        final Thread thread;
        /** Initial task to run. Possibly null. */
        Runnable firstTask;
        /** Per-thread task counter */
        volatile long completedTasks;

        /**
         * Creates with given first task and thread from ThreadFactory.
         * @param firstTask the first task (null if none)
         */
        Worker(Runnable firstTask) {
            setState(-1); // inhibit interrupts until runWorker
            this.firstTask = firstTask;
            this.thread = getThreadFactory().newThread(this);
        }

        /** Delegates main run loop to outer runWorker. */
        public void run() {
            runWorker(this);
        }

        // Lock methods
        //
        // The value 0 represents the unlocked state.
        // The value 1 represents the locked state.

        protected boolean isHeldExclusively() {
            return getState() != 0;
        }

        protected boolean tryAcquire(int unused) {
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        protected boolean tryRelease(int unused) {
            setExclusiveOwnerThread(null);
            setState(0);
            return true;
        }

        public void lock()        { acquire(1); }
        public boolean tryLock()  { return tryAcquire(1); }
        public void unlock()      { release(1); }
        public boolean isLocked() { return isHeldExclusively(); }

        void interruptIfStarted() {
            Thread t;
            if (getState() >= 0 && (t = thread) != null && !t.isInterrupted()) {
                try {
                    t.interrupt();
                } catch (SecurityException ignore) {
                }
            }
        }
    }

    /*
     * Methods for setting control state
     */

    /**
     * Transitions runState to given target, or leaves it alone if
     * already at least the given target.
     *
     * @param targetState the desired state, either SHUTDOWN or STOP
     *        (but not TIDYING or TERMINATED -- use tryTerminate for that)
     */
    private void advanceRunState(int targetState) {
        for (;;) {
            int c = ctl.get();
            if (runStateAtLeast(c, targetState) ||
                ctl.compareAndSet(c, ctlOf(targetState, workerCountOf(c))))
                break;
        }
    }

    /**
     * Transitions to TERMINATED state if either (SHUTDOWN and pool and
     * queue empty) or (STOP and pool empty). If otherwise eligible to
     * terminate but workerCount is nonzero, interrupts an idle worker to
     * ensure that shutdown signals propagate. This method must be called
     * following any action that might make termination possible.
     */
    final void tryTerminate() {
        for (;;) {
            int c = ctl.get();
            if (isRunning(c) ||
                runStateAtLeast(c, TIDYING) ||
                (runStateOf(c) == SHUTDOWN && ! workQueue.isEmpty()))
                return;
            if (workerCountOf(c) != 0) { // Eligible to terminate
                interruptIdleWorkers(ONLY_ONE);
                return;
            }

            final ReentrantLock mainLock = this.mainLock;
            mainLock.lock();
            try {
                if (ctl.compareAndSet(c, ctlOf(TIDYING, 0))) {
                    try {
                        terminated();
                    } finally {
                        ctl.set(ctlOf(TERMINATED, 0));
                        termination.signalAll();
                    }
                    return;
                }
            } finally {
                mainLock.unlock();
            }
            // else retry on failed CAS
        }
    }

    /*
     * Methods for controlling interrupts to worker threads.
     */

    /**
     * If there is a security manager, makes sure caller has permission to
     * shut down threads in general (see shutdownPerm), and additionally
     * that the caller is allowed to interrupt each worker thread.
     */
    private void checkShutdownAccess() {
        SecurityManager security = System.getSecurityManager();
        if (security != null) {
            security.checkPermission(shutdownPerm);
            final ReentrantLock mainLock = this.mainLock;
            mainLock.lock();
            try {
                for (Worker w : workers)
                    security.checkAccess(w.thread);
            } finally {
                mainLock.unlock();
            }
        }
    }

    /**
     * Interrupts all threads, even if active. Ignores SecurityExceptions
     * (in which case some threads may remain uninterrupted).
     */
    private void interruptWorkers() {
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            for (Worker w : workers)
                w.interruptIfStarted();
        } finally {
            mainLock.unlock();
        }
    }

    /**
     * Interrupts threads that might be waiting for tasks (as indicated by
     * not being locked) so they can check for termination or configuration
     * changes. Ignores SecurityExceptions.
     *
     * @param onlyOne If true, interrupt at most one worker. This is called
     *        only from tryTerminate when termination is otherwise enabled
     *        but there are still other workers, to propagate shutdown
     *        signals in case all threads are currently waiting.
     */
    private void interruptIdleWorkers(boolean onlyOne) {
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            for (Worker w : workers) {
                Thread t = w.thread;
                if (!t.isInterrupted() && w.tryLock()) {
                    try {
                        t.interrupt();
                    } catch (SecurityException ignore) {
                    } finally {
                        w.unlock();
                    }
                }
                if (onlyOne)
                    break;
            }
        } finally {
            mainLock.unlock();
        }
    }

    /**
     * Common form of interruptIdleWorkers, to avoid having to
     * remember what the boolean argument means.
     */
    private void interruptIdleWorkers() {
        interruptIdleWorkers(false);
    }

    private static final boolean ONLY_ONE = true;

    /*
     * Misc utilities, most of which are also exported to
     * ScheduledThreadPoolExecutor
     */

    /**
     * Invokes the rejected execution handler for the given command.
     * Package-protected for use by ScheduledThreadPoolExecutor.
     */
    final void reject(Runnable command) {
        handler.rejectedExecution(command, this);
    }

    /**
     * Performs any further cleanup following run state transition on
     * invocation of shutdown. A no-op here, but used by
     * ScheduledThreadPoolExecutor to cancel delayed tasks.
     */
    void onShutdown() {
    }

    /**
     * State check needed by ScheduledThreadPoolExecutor to
     * enable running tasks during shutdown.
     *
     * @param shutdownOK true if should return true if SHUTDOWN
     */
    final boolean isRunningOrShutdown(boolean shutdownOK) {
        int rs = runStateOf(ctl.get());
        return rs == RUNNING || (rs == SHUTDOWN && shutdownOK);
    }

    /**
     * Drains the task queue into a new list, normally using drainTo. But
     * if the queue is a DelayQueue or any other kind of queue for which
     * poll or drainTo may fail to remove some elements, it deletes them
     * one by one.
     */
    private List<Runnable> drainQueue() {
        BlockingQueue<Runnable> q = workQueue;
        List<Runnable> taskList = new ArrayList<Runnable>();
        q.drainTo(taskList);
        if (!q.isEmpty()) {
            for (Runnable r : q.toArray(new Runnable[0])) {
                if (q.remove(r))
                    taskList.add(r);
            }
        }
        return taskList;
    }

    /*
     * Methods for creating, running and cleaning up after workers
     */

    /**
     * Checks if a new worker can be added with respect to current pool
     * state and the given bound (either core or maximum). If so, the
     * worker count is adjusted accordingly, and, if possible, a new worker
     * is created and started, running firstTask as its first task. Returns
     * false if the pool is stopped or eligible to shut down, or if the
     * thread factory fails to create a thread (in which case we roll back
     * cleanly).
     *
     * @param firstTask the task the new thread should run first (or null if none)
     * @param core if true use corePoolSize as bound, else maximumPoolSize
     * @return true if successful
     */
    private boolean addWorker(Runnable firstTask, boolean core) {
        retry:
        for (;;) {
            int c = ctl.get();
            int rs = runStateOf(c);

            // Check if queue empty only if necessary.
            if (rs >= SHUTDOWN &&
                ! (rs == SHUTDOWN &&
                   firstTask == null &&
                   ! workQueue.isEmpty()))
                return false;

            for (;;) {
                int wc = workerCountOf(c);
                if (wc >= CAPACITY ||
                    wc >= (core ? corePoolSize : maximumPoolSize))
                    return false;
                if (compareAndIncrementWorkerCount(c))
                    break retry;
                c = ctl.get();  // Re-read ctl
                if (runStateOf(c) != rs)
                    continue retry;
                // else CAS failed due to workerCount change; retry inner loop
            }
        }

        boolean workerStarted = false;
        boolean workerAdded = false;
        Worker w = null;
        try {
            final ReentrantLock mainLock = this.mainLock;
            w = new Worker(firstTask);
            final Thread t = w.thread;
            if (t != null) {
                mainLock.lock();
                try {
                    // Recheck while holding lock.
                    // Back out on ThreadFactory failure or if
                    // shut down before lock acquired.
                    int c = ctl.get();
                    int rs = runStateOf(c);

                    if (rs < SHUTDOWN ||
                        (rs == SHUTDOWN && firstTask == null)) {
                        if (t.isAlive()) // precheck that t is startable
                            throw new IllegalThreadStateException();
                        workers.add(w);
                        int s = workers.size();
                        if (s > largestPoolSize)
                            largestPoolSize = s;
                        workerAdded = true;
                    }
                } finally {
                    mainLock.unlock();
                }
                if (workerAdded) {
                    t.start();
                    workerStarted = true;
                }
            }
        } finally {
            if (! workerStarted)
                addWorkerFailed(w);
        }
        return workerStarted;
    }

    /**
     * Rolls back the worker thread creation.
962 * - removes worker from workers, if present 963 * - decrements worker count 964 * - rechecks for termination, in case the existence of this 965 * worker was holding up termination 966 */ 967 private void addWorkerFailed(Worker w) { 968 final ReentrantLock mainLock = this.mainLock; 969 mainLock.lock(); 970 try { 971 if (w != null) 972 workers.remove(w); 973 decrementWorkerCount(); 974 tryTerminate(); 975 } finally { 976 mainLock.unlock(); 977 } 978 } 979 980 /** 981 * Performs cleanup and bookkeeping for a dying worker. Called 982 * only from worker threads. Unless completedAbruptly is set, 983 * assumes that workerCount has already been adjusted to account 984 * for exit. This method removes thread from worker set, and 985 * possibly terminates the pool or replaces the worker if either 986 * it exited due to user task exception or if fewer than 987 * corePoolSize workers are running or queue is non-empty but 988 * there are no workers. 989 * 990 * @param w the worker 991 * @param completedAbruptly if the worker died due to user exception 992 */ 993 private void processWorkerExit(Worker w, boolean completedAbruptly) { 994 if (completedAbruptly) // If abrupt, then workerCount wasn't adjusted 995 decrementWorkerCount(); 996 997 final ReentrantLock mainLock = this.mainLock; 998 mainLock.lock(); 999 try {1000 completedTaskCount += w.completedTasks;1001 workers.remove(w);1002 } finally {1003 mainLock.unlock();1004 }1005 1006 tryTerminate();1007 1008 int c = ctl.get();1009 if (runStateLessThan(c, STOP)) {1010 if (!completedAbruptly) {1011 int min = allowCoreThreadTimeOut ? 0 : corePoolSize;1012 if (min == 0 && ! workQueue.isEmpty())1013 min = 1;1014 if (workerCountOf(c) >= min)1015 return; // replacement not needed1016 }1017 addWorker(null, false);1018 }1019 }1020 1021 /**1022 * Performs blocking or timed wait for a task, depending on1023 * current configuration settings, or returns null if this worker1024 * must exit because of any of:1025 * 1. There are more than maximumPoolSize workers (due to1026 * a call to setMaximumPoolSize).1027 * 2. The pool is stopped.1028 * 3. The pool is shutdown and the queue is empty.1029 * 4. This worker timed out waiting for a task, and timed-out1030 * workers are subject to termination (that is,1031 * {@code allowCoreThreadTimeOut || workerCount > corePoolSize})1032 * both before and after the timed wait.1033 *1034 * @return task, or null if the worker must exit, in which case1035 * workerCount is decremented1036 */1037 private Runnable getTask() {1038 boolean timedOut = false; // Did the last poll() time out?1039 1040 retry:1041 for (;;) {1042 int c = ctl.get();1043 int rs = runStateOf(c);1044 1045 // Check if queue empty only if necessary.1046 if (rs >= SHUTDOWN && (rs >= STOP || workQueue.isEmpty())) {1047 decrementWorkerCount();1048 return null;1049 }1050 1051 boolean timed; // Are workers subject to culling?1052 1053 for (;;) {1054 int wc = workerCountOf(c);1055 timed = allowCoreThreadTimeOut || wc > corePoolSize;1056 1057 if (wc <= maximumPoolSize && ! 
(timedOut && timed))1058 break;1059 if (compareAndDecrementWorkerCount(c))1060 return null;1061 c = ctl.get(); // Re-read ctl1062 if (runStateOf(c) != rs)1063 continue retry;1064 // else CAS failed due to workerCount change; retry inner loop1065 }1066 1067 try {1068 Runnable r = timed ?1069 workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :1070 workQueue.take();1071 if (r != null)1072 return r;1073 timedOut = true;1074 } catch (InterruptedException retry) {1075 timedOut = false;1076 }1077 }1078 }1079 1080 /**1081 * Main worker run loop. Repeatedly gets tasks from queue and1082 * executes them, while coping with a number of issues:1083 *1084 * 1. We may start out with an initial task, in which case we1085 * don't need to get the first one. Otherwise, as long as pool is1086 * running, we get tasks from getTask. If it returns null then the1087 * worker exits due to changed pool state or configuration1088 * parameters. Other exits result from exception throws in1089 * external code, in which case completedAbruptly holds, which1090 * usually leads processWorkerExit to replace this thread.1091 *1092 * 2. Before running any task, the lock is acquired to prevent1093 * other pool interrupts while the task is executing, and1094 * clearInterruptsForTaskRun called to ensure that unless pool is1095 * stopping, this thread does not have its interrupt set.1096 *1097 * 3. Each task run is preceded by a call to beforeExecute, which1098 * might throw an exception, in which case we cause thread to die1099 * (breaking loop with completedAbruptly true) without processing1100 * the task.1101 *1102 * 4. Assuming beforeExecute completes normally, we run the task,1103 * gathering any of its thrown exceptions to send to1104 * afterExecute. We separately handle RuntimeException, Error1105 * (both of which the specs guarantee that we trap) and arbitrary1106 * Throwables. Because we cannot rethrow Throwables within1107 * Runnable.run, we wrap them within Errors on the way out (to the1108 * thread's UncaughtExceptionHandler). Any thrown exception also1109 * conservatively causes thread to die.1110 *1111 * 5. After task.run completes, we call afterExecute, which may1112 * also throw an exception, which will also cause thread to1113 * die. According to JLS Sec 14.20, this exception is the one that1114 * will be in effect even if task.run throws.1115 *1116 * The net effect of the exception mechanics is that afterExecute1117 * and the thread's UncaughtExceptionHandler have as accurate1118 * information as we can provide about any problems encountered by1119 * user code.1120 *1121 * @param w the worker1122 */1123 final void runWorker(Worker w) {1124 Thread wt = Thread.currentThread();1125 Runnable task = w.firstTask;1126 w.firstTask = null;1127 w.unlock(); // allow interrupts1128 boolean completedAbruptly = true;1129 try {1130 while (task != null || (task = getTask()) != null) {1131 w.lock();1132 // If pool is stopping, ensure thread is interrupted;1133 // if not, ensure thread is not interrupted. 
This1134 // requires a recheck in second case to deal with1135 // shutdownNow race while clearing interrupt1136 if ((runStateAtLeast(ctl.get(), STOP) ||1137 (Thread.interrupted() &&1138 runStateAtLeast(ctl.get(), STOP))) &&1139 !wt.isInterrupted())1140 wt.interrupt();1141 try {1142 beforeExecute(wt, task);1143 Throwable thrown = null;1144 try {1145 task.run();1146 } catch (RuntimeException x) {1147 thrown = x; throw x;1148 } catch (Error x) {1149 thrown = x; throw x;1150 } catch (Throwable x) {1151 thrown = x; throw new Error(x);1152 } finally {1153 afterExecute(task, thrown);1154 }1155 } finally {1156 task = null;1157 w.completedTasks++;1158 w.unlock();1159 }1160 }1161 completedAbruptly = false;1162 } finally {1163 processWorkerExit(w, completedAbruptly);1164 }1165 }1166 1167 // Public constructors and methods1168 1169 /**1170 * Creates a new {@code ThreadPoolExecutor} with the given initial1171 * parameters and default thread factory and rejected execution handler.1172 * It may be more convenient to use one of the {@link Executors} factory1173 * methods instead of this general purpose constructor.1174 *1175 * @param corePoolSize the number of threads to keep in the pool, even1176 * if they are idle, unless {@code allowCoreThreadTimeOut} is set1177 * @param maximumPoolSize the maximum number of threads to allow in the1178 * pool1179 * @param keepAliveTime when the number of threads is greater than1180 * the core, this is the maximum time that excess idle threads1181 * will wait for new tasks before terminating.1182 * @param unit the time unit for the {@code keepAliveTime} argument1183 * @param workQueue the queue to use for holding tasks before they are1184 * executed. This queue will hold only the {@code Runnable}1185 * tasks submitted by the {@code execute} method.1186 * @throws IllegalArgumentException if one of the following holds:
1187 * {@code corePoolSize < 0}
1188 * {@code keepAliveTime < 0}
1189 * {@code maximumPoolSize <= 0}
1190 * {@code maximumPoolSize < corePoolSize}1191 * @throws NullPointerException if {@code workQueue} is null1192 */1193 public ThreadPoolExecutor(int corePoolSize,1194 int maximumPoolSize,1195 long keepAliveTime,1196 TimeUnit unit,1197 BlockingQueue
workQueue) {1198 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,1199 Executors.defaultThreadFactory(), defaultHandler);1200 }1201 1202 /**1203 * Creates a new {@code ThreadPoolExecutor} with the given initial1204 * parameters and default rejected execution handler.1205 *1206 * @param corePoolSize the number of threads to keep in the pool, even1207 * if they are idle, unless {@code allowCoreThreadTimeOut} is set1208 * @param maximumPoolSize the maximum number of threads to allow in the1209 * pool1210 * @param keepAliveTime when the number of threads is greater than1211 * the core, this is the maximum time that excess idle threads1212 * will wait for new tasks before terminating.1213 * @param unit the time unit for the {@code keepAliveTime} argument1214 * @param workQueue the queue to use for holding tasks before they are1215 * executed. This queue will hold only the {@code Runnable}1216 * tasks submitted by the {@code execute} method.1217 * @param threadFactory the factory to use when the executor1218 * creates a new thread1219 * @throws IllegalArgumentException if one of the following holds:
1220 * {@code corePoolSize < 0}
1221 * {@code keepAliveTime < 0}
1222 * {@code maximumPoolSize <= 0}
1223 * {@code maximumPoolSize < corePoolSize}1224 * @throws NullPointerException if {@code workQueue}1225 * or {@code threadFactory} is null1226 */1227 public ThreadPoolExecutor(int corePoolSize,1228 int maximumPoolSize,1229 long keepAliveTime,1230 TimeUnit unit,1231 BlockingQueue
workQueue,1232 ThreadFactory threadFactory) {1233 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,1234 threadFactory, defaultHandler);1235 }1236 1237 /**1238 * Creates a new {@code ThreadPoolExecutor} with the given initial1239 * parameters and default thread factory.1240 *1241 * @param corePoolSize the number of threads to keep in the pool, even1242 * if they are idle, unless {@code allowCoreThreadTimeOut} is set1243 * @param maximumPoolSize the maximum number of threads to allow in the1244 * pool1245 * @param keepAliveTime when the number of threads is greater than1246 * the core, this is the maximum time that excess idle threads1247 * will wait for new tasks before terminating.1248 * @param unit the time unit for the {@code keepAliveTime} argument1249 * @param workQueue the queue to use for holding tasks before they are1250 * executed. This queue will hold only the {@code Runnable}1251 * tasks submitted by the {@code execute} method.1252 * @param handler the handler to use when execution is blocked1253 * because the thread bounds and queue capacities are reached1254 * @throws IllegalArgumentException if one of the following holds:
1255 * {@code corePoolSize < 0}
1256 * {@code keepAliveTime < 0}
1257 * {@code maximumPoolSize <= 0}
1258 * {@code maximumPoolSize < corePoolSize}1259 * @throws NullPointerException if {@code workQueue}1260 * or {@code handler} is null1261 */1262 public ThreadPoolExecutor(int corePoolSize,1263 int maximumPoolSize,1264 long keepAliveTime,1265 TimeUnit unit,1266 BlockingQueue
workQueue,1267 RejectedExecutionHandler handler) {1268 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,1269 Executors.defaultThreadFactory(), handler);1270 }1271 1272 /**1273 * Creates a new {@code ThreadPoolExecutor} with the given initial1274 * parameters.1275 *1276 * @param corePoolSize the number of threads to keep in the pool, even1277 * if they are idle, unless {@code allowCoreThreadTimeOut} is set1278 * @param maximumPoolSize the maximum number of threads to allow in the1279 * pool1280 * @param keepAliveTime when the number of threads is greater than1281 * the core, this is the maximum time that excess idle threads1282 * will wait for new tasks before terminating.1283 * @param unit the time unit for the {@code keepAliveTime} argument1284 * @param workQueue the queue to use for holding tasks before they are1285 * executed. This queue will hold only the {@code Runnable}1286 * tasks submitted by the {@code execute} method.1287 * @param threadFactory the factory to use when the executor1288 * creates a new thread1289 * @param handler the handler to use when execution is blocked1290 * because the thread bounds and queue capacities are reached1291 * @throws IllegalArgumentException if one of the following holds:
1292 * {@code corePoolSize < 0}
1293 * {@code keepAliveTime < 0}
1294 * {@code maximumPoolSize <= 0}
1295 * {@code maximumPoolSize < corePoolSize}1296 * @throws NullPointerException if {@code workQueue}1297 * or {@code threadFactory} or {@code handler} is null1298 */1299 public ThreadPoolExecutor(int corePoolSize,1300 int maximumPoolSize,1301 long keepAliveTime,1302 TimeUnit unit,1303 BlockingQueue
workQueue,1304 ThreadFactory threadFactory,1305 RejectedExecutionHandler handler) {1306 if (corePoolSize < 0 ||1307 maximumPoolSize <= 0 ||1308 maximumPoolSize < corePoolSize ||1309 keepAliveTime < 0)1310 throw new IllegalArgumentException();1311 if (workQueue == null || threadFactory == null || handler == null)1312 throw new NullPointerException();1313 this.corePoolSize = corePoolSize;1314 this.maximumPoolSize = maximumPoolSize;1315 this.workQueue = workQueue;1316 this.keepAliveTime = unit.toNanos(keepAliveTime);1317 this.threadFactory = threadFactory;1318 this.handler = handler;1319 }1320 1321 /**1322 * Executes the given task sometime in the future. The task1323 * may execute in a new thread or in an existing pooled thread.1324 *1325 * If the task cannot be submitted for execution, either because this1326 * executor has been shutdown or because its capacity has been reached,1327 * the task is handled by the current {@code RejectedExecutionHandler}.1328 *1329 * @param command the task to execute1330 * @throws RejectedExecutionException at discretion of1331 * {@code RejectedExecutionHandler}, if the task1332 * cannot be accepted for execution1333 * @throws NullPointerException if {@code command} is null1334 */1335 public void execute(Runnable command) {1336 if (command == null)1337 throw new NullPointerException();1338 /*1339 * Proceed in 3 steps:1340 *1341 * 1. If fewer than corePoolSize threads are running, try to1342 * start a new thread with the given command as its first1343 * task. The call to addWorker atomically checks runState and1344 * workerCount, and so prevents false alarms that would add1345 * threads when it shouldn't, by returning false.1346 *1347 * 2. If a task can be successfully queued, then we still need1348 * to double-check whether we should have added a thread1349 * (because existing ones died since last checking) or that1350 * the pool shut down since entry into this method. So we1351 * recheck state and if necessary roll back the enqueuing if1352 * stopped, or start a new thread if there are none.1353 *1354 * 3. If we cannot queue task, then we try to add a new1355 * thread. If it fails, we know we are shut down or saturated1356 * and so reject the task.1357 */1358 int c = ctl.get();1359 if (workerCountOf(c) < corePoolSize) {1360 if (addWorker(command, true))1361 return;1362 c = ctl.get();1363 }1364 if (isRunning(c) && workQueue.offer(command)) {1365 int recheck = ctl.get();1366 if (! isRunning(recheck) && remove(command))1367 reject(command);1368 else if (workerCountOf(recheck) == 0)1369 addWorker(null, false);1370 }1371 else if (!addWorker(command, false))1372 reject(command);1373 }1374 1375 /**1376 * Initiates an orderly shutdown in which previously submitted1377 * tasks are executed, but no new tasks will be accepted.1378 * Invocation has no additional effect if already shut down.1379 *1380 *

This method does not wait for previously submitted tasks to1381 * complete execution. Use {@link #awaitTermination awaitTermination}1382 * to do that.1383 *1384 * @throws SecurityException {@inheritDoc}1385 */1386 public void shutdown() {1387 final ReentrantLock mainLock = this.mainLock;1388 mainLock.lock();1389 try {1390 checkShutdownAccess();1391 advanceRunState(SHUTDOWN);1392 interruptIdleWorkers();1393 onShutdown(); // hook for ScheduledThreadPoolExecutor1394 } finally {1395 mainLock.unlock();1396 }1397 tryTerminate();1398 }1399 1400 /**1401 * Attempts to stop all actively executing tasks, halts the1402 * processing of waiting tasks, and returns a list of the tasks1403 * that were awaiting execution. These tasks are drained (removed)1404 * from the task queue upon return from this method.1405 *1406 *

This method does not wait for actively executing tasks to1407 * terminate. Use {@link #awaitTermination awaitTermination} to1408 * do that.1409 *1410 *

There are no guarantees beyond best-effort attempts to stop1411 * processing actively executing tasks. This implementation1412 * cancels tasks via {@link Thread#interrupt}, so any task that1413 * fails to respond to interrupts may never terminate.1414 *1415 * @throws SecurityException {@inheritDoc}1416 */1417 public List

shutdownNow() {1418 List
tasks;1419 final ReentrantLock mainLock = this.mainLock;1420 mainLock.lock();1421 try {1422 checkShutdownAccess();1423 advanceRunState(STOP);1424 interruptWorkers();1425 tasks = drainQueue();1426 } finally {1427 mainLock.unlock();1428 }1429 tryTerminate();1430 return tasks;1431 }1432 1433 public boolean isShutdown() {1434 return ! isRunning(ctl.get());1435 }1436 1437 /**1438 * Returns true if this executor is in the process of terminating1439 * after {@link #shutdown} or {@link #shutdownNow} but has not1440 * completely terminated. This method may be useful for1441 * debugging. A return of {@code true} reported a sufficient1442 * period after shutdown may indicate that submitted tasks have1443 * ignored or suppressed interruption, causing this executor not1444 * to properly terminate.1445 *1446 * @return true if terminating but not yet terminated1447 */1448 public boolean isTerminating() {1449 int c = ctl.get();1450 return ! isRunning(c) && runStateLessThan(c, TERMINATED);1451 }1452 1453 public boolean isTerminated() {1454 return runStateAtLeast(ctl.get(), TERMINATED);1455 }1456 1457 public boolean awaitTermination(long timeout, TimeUnit unit)1458 throws InterruptedException {1459 long nanos = unit.toNanos(timeout);1460 final ReentrantLock mainLock = this.mainLock;1461 mainLock.lock();1462 try {1463 for (;;) {1464 if (runStateAtLeast(ctl.get(), TERMINATED))1465 return true;1466 if (nanos <= 0)1467 return false;1468 nanos = termination.awaitNanos(nanos);1469 }1470 } finally {1471 mainLock.unlock();1472 }1473 }1474 1475 /**1476 * Invokes {@code shutdown} when this executor is no longer1477 * referenced and it has no threads.1478 */1479 protected void finalize() {1480 shutdown();1481 }1482 1483 /**1484 * Sets the thread factory used to create new threads.1485 *1486 * @param threadFactory the new thread factory1487 * @throws NullPointerException if threadFactory is null1488 * @see #getThreadFactory1489 */1490 public void setThreadFactory(ThreadFactory threadFactory) {1491 if (threadFactory == null)1492 throw new NullPointerException();1493 this.threadFactory = threadFactory;1494 }1495 1496 /**1497 * Returns the thread factory used to create new threads.1498 *1499 * @return the current thread factory1500 * @see #setThreadFactory1501 */1502 public ThreadFactory getThreadFactory() {1503 return threadFactory;1504 }1505 1506 /**1507 * Sets a new handler for unexecutable tasks.1508 *1509 * @param handler the new handler1510 * @throws NullPointerException if handler is null1511 * @see #getRejectedExecutionHandler1512 */1513 public void setRejectedExecutionHandler(RejectedExecutionHandler handler) {1514 if (handler == null)1515 throw new NullPointerException();1516 this.handler = handler;1517 }1518 1519 /**1520 * Returns the current handler for unexecutable tasks.1521 *1522 * @return the current handler1523 * @see #setRejectedExecutionHandler1524 */1525 public RejectedExecutionHandler getRejectedExecutionHandler() {1526 return handler;1527 }1528 1529 /**1530 * Sets the core number of threads. This overrides any value set1531 * in the constructor. If the new value is smaller than the1532 * current value, excess existing threads will be terminated when1533 * they next become idle. 
If larger, new threads will, if needed,1534 * be started to execute any queued tasks.1535 *1536 * @param corePoolSize the new core size1537 * @throws IllegalArgumentException if {@code corePoolSize < 0}1538 * @see #getCorePoolSize1539 */1540 public void setCorePoolSize(int corePoolSize) {1541 if (corePoolSize < 0)1542 throw new IllegalArgumentException();1543 int delta = corePoolSize - this.corePoolSize;1544 this.corePoolSize = corePoolSize;1545 if (workerCountOf(ctl.get()) > corePoolSize)1546 interruptIdleWorkers();1547 else if (delta > 0) {1548 // We don't really know how many new threads are "needed".1549 // As a heuristic, prestart enough new workers (up to new1550 // core size) to handle the current number of tasks in1551 // queue, but stop if queue becomes empty while doing so.1552 int k = Math.min(delta, workQueue.size());1553 while (k-- > 0 && addWorker(null, true)) {1554 if (workQueue.isEmpty())1555 break;1556 }1557 }1558 }1559 1560 /**1561 * Returns the core number of threads.1562 *1563 * @return the core number of threads1564 * @see #setCorePoolSize1565 */1566 public int getCorePoolSize() {1567 return corePoolSize;1568 }1569 1570 /**1571 * Starts a core thread, causing it to idly wait for work. This1572 * overrides the default policy of starting core threads only when1573 * new tasks are executed. This method will return {@code false}1574 * if all core threads have already been started.1575 *1576 * @return {@code true} if a thread was started1577 */1578 public boolean prestartCoreThread() {1579 return workerCountOf(ctl.get()) < corePoolSize &&1580 addWorker(null, true);1581 }1582 1583 /**1584 * Same as prestartCoreThread except arranges that at least one1585 * thread is started even if corePoolSize is 0.1586 */1587 void ensurePrestart() {1588 int wc = workerCountOf(ctl.get());1589 if (wc < corePoolSize)1590 addWorker(null, true);1591 else if (wc == 0)1592 addWorker(null, false);1593 }1594 1595 /**1596 * Starts all core threads, causing them to idly wait for work. This1597 * overrides the default policy of starting core threads only when1598 * new tasks are executed.1599 *1600 * @return the number of threads started1601 */1602 public int prestartAllCoreThreads() {1603 int n = 0;1604 while (addWorker(null, true))1605 ++n;1606 return n;1607 }1608 1609 /**1610 * Returns true if this pool allows core threads to time out and1611 * terminate if no tasks arrive within the keepAlive time, being1612 * replaced if needed when new tasks arrive. When true, the same1613 * keep-alive policy applying to non-core threads applies also to1614 * core threads. When false (the default), core threads are never1615 * terminated due to lack of incoming tasks.1616 *1617 * @return {@code true} if core threads are allowed to time out,1618 * else {@code false}1619 *1620 * @since 1.61621 */1622 public boolean allowsCoreThreadTimeOut() {1623 return allowCoreThreadTimeOut;1624 }1625 1626 /**1627 * Sets the policy governing whether core threads may time out and1628 * terminate if no tasks arrive within the keep-alive time, being1629 * replaced if needed when new tasks arrive. When false, core1630 * threads are never terminated due to lack of incoming1631 * tasks. When true, the same keep-alive policy applying to1632 * non-core threads applies also to core threads. To avoid1633 * continual thread replacement, the keep-alive time must be1634 * greater than zero when setting {@code true}. 
This method1635 * should in general be called before the pool is actively used.1636 *1637 * @param value {@code true} if should time out, else {@code false}1638 * @throws IllegalArgumentException if value is {@code true}1639 * and the current keep-alive time is not greater than zero1640 *1641 * @since 1.61642 */1643 public void allowCoreThreadTimeOut(boolean value) {1644 if (value && keepAliveTime <= 0)1645 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");1646 if (value != allowCoreThreadTimeOut) {1647 allowCoreThreadTimeOut = value;1648 if (value)1649 interruptIdleWorkers();1650 }1651 }1652 1653 /**1654 * Sets the maximum allowed number of threads. This overrides any1655 * value set in the constructor. If the new value is smaller than1656 * the current value, excess existing threads will be1657 * terminated when they next become idle.1658 *1659 * @param maximumPoolSize the new maximum1660 * @throws IllegalArgumentException if the new maximum is1661 * less than or equal to zero, or1662 * less than the {@linkplain #getCorePoolSize core pool size}1663 * @see #getMaximumPoolSize1664 */1665 public void setMaximumPoolSize(int maximumPoolSize) {1666 if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize)1667 throw new IllegalArgumentException();1668 this.maximumPoolSize = maximumPoolSize;1669 if (workerCountOf(ctl.get()) > maximumPoolSize)1670 interruptIdleWorkers();1671 }1672 1673 /**1674 * Returns the maximum allowed number of threads.1675 *1676 * @return the maximum allowed number of threads1677 * @see #setMaximumPoolSize1678 */1679 public int getMaximumPoolSize() {1680 return maximumPoolSize;1681 }1682 1683 /**1684 * Sets the time limit for which threads may remain idle before1685 * being terminated. If there are more than the core number of1686 * threads currently in the pool, after waiting this amount of1687 * time without processing a task, excess threads will be1688 * terminated. This overrides any value set in the constructor.1689 *1690 * @param time the time to wait. A time value of zero will cause1691 * excess threads to terminate immediately after executing tasks.1692 * @param unit the time unit of the {@code time} argument1693 * @throws IllegalArgumentException if {@code time} less than zero or1694 * if {@code time} is zero and {@code allowsCoreThreadTimeOut}1695 * @see #getKeepAliveTime1696 */1697 public void setKeepAliveTime(long time, TimeUnit unit) {1698 if (time < 0)1699 throw new IllegalArgumentException();1700 if (time == 0 && allowsCoreThreadTimeOut())1701 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");1702 long keepAliveTime = unit.toNanos(time);1703 long delta = keepAliveTime - this.keepAliveTime;1704 this.keepAliveTime = keepAliveTime;1705 if (delta < 0)1706 interruptIdleWorkers();1707 }1708 1709 /**1710 * Returns the thread keep-alive time, which is the amount of time1711 * that threads in excess of the core pool size may remain1712 * idle before being terminated.1713 *1714 * @param unit the desired time unit of the result1715 * @return the time limit1716 * @see #setKeepAliveTime1717 */1718 public long getKeepAliveTime(TimeUnit unit) {1719 return unit.convert(keepAliveTime, TimeUnit.NANOSECONDS);1720 }1721 1722 /* User-level queue utilities */1723 1724 /**1725 * Returns the task queue used by this executor. Access to the1726 * task queue is intended primarily for debugging and monitoring.1727 * This queue may be in active use. 
Retrieving the task queue1728 * does not prevent queued tasks from executing.1729 *1730 * @return the task queue1731 */1732 public BlockingQueue
getQueue() {1733 return workQueue;1734 }1735 1736 /**1737 * Removes this task from the executor's internal queue if it is1738 * present, thus causing it not to be run if it has not already1739 * started.1740 *1741 *

This method may be useful as one part of a cancellation1742 * scheme. It may fail to remove tasks that have been converted1743 * into other forms before being placed on the internal queue. For1744 * example, a task entered using {@code submit} might be1745 * converted into a form that maintains {@code Future} status.1746 * However, in such cases, method {@link #purge} may be used to1747 * remove those Futures that have been cancelled.1748 *1749 * @param task the task to remove1750 * @return true if the task was removed1751 */1752 public boolean remove(Runnable task) {1753 boolean removed = workQueue.remove(task);1754 tryTerminate(); // In case SHUTDOWN and now empty1755 return removed;1756 }1757 1758 /**1759 * Tries to remove from the work queue all {@link Future}1760 * tasks that have been cancelled. This method can be useful as a1761 * storage reclamation operation, that has no other impact on1762 * functionality. Cancelled tasks are never executed, but may1763 * accumulate in work queues until worker threads can actively1764 * remove them. Invoking this method instead tries to remove them now.1765 * However, this method may fail to remove tasks in1766 * the presence of interference by other threads.1767 */1768 public void purge() {1769 final BlockingQueue

q = workQueue;1770 try {1771 Iterator
it = q.iterator();1772 while (it.hasNext()) {1773 Runnable r = it.next();1774 if (r instanceof Future
&& ((Future
)r).isCancelled())1775 it.remove();1776 }1777 } catch (ConcurrentModificationException fallThrough) {1778 // Take slow path if we encounter interference during traversal.1779 // Make copy for traversal and call remove for cancelled entries.1780 // The slow path is more likely to be O(N*N).1781 for (Object r : q.toArray())1782 if (r instanceof Future
&& ((Future
)r).isCancelled())1783 q.remove(r);1784 }1785 1786 tryTerminate(); // In case SHUTDOWN and now empty1787 }1788 1789 /* Statistics */1790 1791 /**1792 * Returns the current number of threads in the pool.1793 *1794 * @return the number of threads1795 */1796 public int getPoolSize() {1797 final ReentrantLock mainLock = this.mainLock;1798 mainLock.lock();1799 try {1800 // Remove rare and surprising possibility of1801 // isTerminated() && getPoolSize() > 01802 return runStateAtLeast(ctl.get(), TIDYING) ? 01803 : workers.size();1804 } finally {1805 mainLock.unlock();1806 }1807 }1808 1809 /**1810 * Returns the approximate number of threads that are actively1811 * executing tasks.1812 *1813 * @return the number of threads1814 */1815 public int getActiveCount() {1816 final ReentrantLock mainLock = this.mainLock;1817 mainLock.lock();1818 try {1819 int n = 0;1820 for (Worker w : workers)1821 if (w.isLocked())1822 ++n;1823 return n;1824 } finally {1825 mainLock.unlock();1826 }1827 }1828 1829 /**1830 * Returns the largest number of threads that have ever1831 * simultaneously been in the pool.1832 *1833 * @return the number of threads1834 */1835 public int getLargestPoolSize() {1836 final ReentrantLock mainLock = this.mainLock;1837 mainLock.lock();1838 try {1839 return largestPoolSize;1840 } finally {1841 mainLock.unlock();1842 }1843 }1844 1845 /**1846 * Returns the approximate total number of tasks that have ever been1847 * scheduled for execution. Because the states of tasks and1848 * threads may change dynamically during computation, the returned1849 * value is only an approximation.1850 *1851 * @return the number of tasks1852 */1853 public long getTaskCount() {1854 final ReentrantLock mainLock = this.mainLock;1855 mainLock.lock();1856 try {1857 long n = completedTaskCount;1858 for (Worker w : workers) {1859 n += w.completedTasks;1860 if (w.isLocked())1861 ++n;1862 }1863 return n + workQueue.size();1864 } finally {1865 mainLock.unlock();1866 }1867 }1868 1869 /**1870 * Returns the approximate total number of tasks that have1871 * completed execution. Because the states of tasks and threads1872 * may change dynamically during computation, the returned value1873 * is only an approximation, but one that does not ever decrease1874 * across successive calls.1875 *1876 * @return the number of tasks1877 */1878 public long getCompletedTaskCount() {1879 final ReentrantLock mainLock = this.mainLock;1880 mainLock.lock();1881 try {1882 long n = completedTaskCount;1883 for (Worker w : workers)1884 n += w.completedTasks;1885 return n;1886 } finally {1887 mainLock.unlock();1888 }1889 }1890 1891 /**1892 * Returns a string identifying this pool, as well as its state,1893 * including indications of run state and estimated worker and1894 * task counts.1895 *1896 * @return a string identifying this pool, as well as its state1897 */1898 public String toString() {1899 long ncompleted;1900 int nworkers, nactive;1901 final ReentrantLock mainLock = this.mainLock;1902 mainLock.lock();1903 try {1904 ncompleted = completedTaskCount;1905 nactive = 0;1906 nworkers = workers.size();1907 for (Worker w : workers) {1908 ncompleted += w.completedTasks;1909 if (w.isLocked())1910 ++nactive;1911 }1912 } finally {1913 mainLock.unlock();1914 }1915 int c = ctl.get();1916 String rs = (runStateLessThan(c, SHUTDOWN) ? "Running" :1917 (runStateAtLeast(c, TERMINATED) ? 
"Terminated" :1918 "Shutting down"));1919 return super.toString() +1920 "[" + rs +1921 ", pool size = " + nworkers +1922 ", active threads = " + nactive +1923 ", queued tasks = " + workQueue.size() +1924 ", completed tasks = " + ncompleted +1925 "]";1926 }1927 1928 /* Extension hooks */1929 1930 /**1931 * Method invoked prior to executing the given Runnable in the1932 * given thread. This method is invoked by thread {@code t} that1933 * will execute task {@code r}, and may be used to re-initialize1934 * ThreadLocals, or to perform logging.1935 *1936 *

This implementation does nothing, but may be customized in1937 * subclasses. Note: To properly nest multiple overridings, subclasses1938 * should generally invoke {@code super.beforeExecute} at the end of1939 * this method.1940 *1941 * @param t the thread that will run task {@code r}1942 * @param r the task that will be executed1943 */1944 protected void beforeExecute(Thread t, Runnable r) { }1945 1946 /**1947 * Method invoked upon completion of execution of the given Runnable.1948 * This method is invoked by the thread that executed the task. If1949 * non-null, the Throwable is the uncaught {@code RuntimeException}1950 * or {@code Error} that caused execution to terminate abruptly.1951 *1952 *

This implementation does nothing, but may be customized in1953 * subclasses. Note: To properly nest multiple overridings, subclasses1954 * should generally invoke {@code super.afterExecute} at the1955 * beginning of this method.1956 *1957 *

Note: When actions are enclosed in tasks (such as1958 * {@link FutureTask}) either explicitly or via methods such as1959 * {@code submit}, these task objects catch and maintain1960 * computational exceptions, and so they do not cause abrupt1961 * termination, and the internal exceptions are not1962 * passed to this method. If you would like to trap both kinds of1963 * failures in this method, you can further probe for such cases,1964 * as in this sample subclass that prints either the direct cause1965 * or the underlying exception if a task has been aborted:1966 *1967 *

 {@code1968      * class ExtendedExecutor extends ThreadPoolExecutor {1969      *   // ...1970      *   protected void afterExecute(Runnable r, Throwable t) {1971      *     super.afterExecute(r, t);1972      *     if (t == null && r instanceof Future
) {1973 * try {1974 * Object result = ((Future
) r).get();1975 * } catch (CancellationException ce) {1976 * t = ce;1977 * } catch (ExecutionException ee) {1978 * t = ee.getCause();1979 * } catch (InterruptedException ie) {1980 * Thread.currentThread().interrupt(); // ignore/reset1981 * }1982 * }1983 * if (t != null)1984 * System.out.println(t);1985 * }1986 * }}
1987 *1988 * @param r the runnable that has completed1989 * @param t the exception that caused termination, or null if1990 * execution completed normally1991 */1992 protected void afterExecute(Runnable r, Throwable t) { }1993 1994 /**1995 * Method invoked when the Executor has terminated. Default1996 * implementation does nothing. Note: To properly nest multiple1997 * overridings, subclasses should generally invoke1998 * {@code super.terminated} within this method.1999 */2000 protected void terminated() { }2001 2002 /* Predefined RejectedExecutionHandlers */2003 2004 /**2005 * A handler for rejected tasks that runs the rejected task2006 * directly in the calling thread of the {@code execute} method,2007 * unless the executor has been shut down, in which case the task2008 * is discarded.2009 */2010 public static class CallerRunsPolicy implements RejectedExecutionHandler {2011 /**2012 * Creates a {@code CallerRunsPolicy}.2013 */2014 public CallerRunsPolicy() { }2015 2016 /**2017 * Executes task r in the caller's thread, unless the executor2018 * has been shut down, in which case the task is discarded.2019 *2020 * @param r the runnable task requested to be executed2021 * @param e the executor attempting to execute this task2022 */2023 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {2024 if (!e.isShutdown()) {2025 r.run();2026 }2027 }2028 }2029 2030 /**2031 * A handler for rejected tasks that throws a2032 * {@code RejectedExecutionException}.2033 */2034 public static class AbortPolicy implements RejectedExecutionHandler {2035 /**2036 * Creates an {@code AbortPolicy}.2037 */2038 public AbortPolicy() { }2039 2040 /**2041 * Always throws RejectedExecutionException.2042 *2043 * @param r the runnable task requested to be executed2044 * @param e the executor attempting to execute this task2045 * @throws RejectedExecutionException always.2046 */2047 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {2048 throw new RejectedExecutionException("Task " + r.toString() +2049 " rejected from " +2050 e.toString());2051 }2052 }2053 2054 /**2055 * A handler for rejected tasks that silently discards the2056 * rejected task.2057 */2058 public static class DiscardPolicy implements RejectedExecutionHandler {2059 /**2060 * Creates a {@code DiscardPolicy}.2061 */2062 public DiscardPolicy() { }2063 2064 /**2065 * Does nothing, which has the effect of discarding task r.2066 *2067 * @param r the runnable task requested to be executed2068 * @param e the executor attempting to execute this task2069 */2070 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {2071 }2072 }2073 2074 /**2075 * A handler for rejected tasks that discards the oldest unhandled2076 * request and then retries {@code execute}, unless the executor2077 * is shut down, in which case the task is discarded.2078 */2079 public static class DiscardOldestPolicy implements RejectedExecutionHandler {2080 /**2081 * Creates a {@code DiscardOldestPolicy} for the given executor.2082 */2083 public DiscardOldestPolicy() { }2084 2085 /**2086 * Obtains and ignores the next task that the executor2087 * would otherwise execute, if one is immediately available,2088 * and then retries execution of task r, unless the executor2089 * is shut down, in which case task r is instead discarded.2090 *2091 * @param r the runnable task requested to be executed2092 * @param e the executor attempting to execute this task2093 */2094 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {2095 if (!e.isShutdown()) {2096 
e.getQueue().poll();2097 e.execute(r);2098 }2099 }2100 }2101 }

 

Thread pool source code analysis

(I) Creating the thread pool

The creation of a thread pool is illustrated below with newFixedThreadPool().

1. newFixedThreadPool()

newFixedThreadPool() is defined in Executors.java; its source is:

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

Explanation: newFixedThreadPool(int nThreads) creates a thread pool with a fixed number of threads: both the core pool size and the maximum pool size are nThreads.

When newFixedThreadPool() calls ThreadPoolExecutor(), it passes in a LinkedBlockingQueue object. LinkedBlockingQueue is a blocking queue backed by a singly linked list; the pool uses this queue so that, when more tasks have been submitted than the threads can run at once, the extra tasks block and wait in the queue.
For the implementation details of LinkedBlockingQueue, readers can refer to the earlier post on LinkedBlockingQueue.
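To make the relationship concrete, here is a minimal sketch (the class name FixedPoolEquivalentDemo is illustrative) that builds the same kind of pool directly through the ThreadPoolExecutor constructor that newFixedThreadPool() delegates to:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FixedPoolEquivalentDemo {
    public static void main(String[] args) {
        // Same configuration newFixedThreadPool(2) uses internally:
        // corePoolSize == maximumPoolSize == 2, keepAliveTime == 0,
        // and an unbounded LinkedBlockingQueue for waiting tasks.
        ExecutorService pool = new ThreadPoolExecutor(
                2, 2,
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());
        pool.execute(new Runnable() {
            public void run() {
                System.out.println(Thread.currentThread().getName() + " is running.");
            }
        });
        pool.shutdown();
    }
}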

 

2. ThreadPoolExecutor()

ThreadPoolExecutor() is defined in ThreadPoolExecutor.java; its source is:

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue) {
    this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
         Executors.defaultThreadFactory(), defaultHandler);
}

Explanation: this constructor simply delegates to another ThreadPoolExecutor constructor, whose source is:

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler) {
    if (corePoolSize < 0 ||
        maximumPoolSize <= 0 ||
        maximumPoolSize < corePoolSize ||
        keepAliveTime < 0)
        throw new IllegalArgumentException();
    if (workQueue == null || threadFactory == null || handler == null)
        throw new NullPointerException();
    // Core pool size
    this.corePoolSize = corePoolSize;
    // Maximum pool size
    this.maximumPoolSize = maximumPoolSize;
    // The pool's waiting (work) queue
    this.workQueue = workQueue;
    this.keepAliveTime = unit.toNanos(keepAliveTime);
    // Thread factory object
    this.threadFactory = threadFactory;
    // Handle to the rejection policy
    this.handler = handler;
}

Explanation: the ThreadPoolExecutor() constructor only performs initialization.

The values of corePoolSize, maximumPoolSize, unit, keepAliveTime and workQueue are already known; they are all passed in from newFixedThreadPool(). Next, let's look at the threadFactory and handler objects.

 

2.1 ThreadFactory

ThreadFactory is the pool's thread factory: every thread the pool creates is produced through the factory object (threadFactory).

The threadFactory object mentioned above is obtained from Executors.defaultThreadFactory(). The source of defaultThreadFactory() in Executors.java is:

public static ThreadFactory defaultThreadFactory() {
    return new DefaultThreadFactory();
}

defaultThreadFactory() returns a DefaultThreadFactory object. The source of DefaultThreadFactory in Executors.java is:

 

static class DefaultThreadFactory implements ThreadFactory {
    private static final AtomicInteger poolNumber = new AtomicInteger(1);
    private final ThreadGroup group;
    private final AtomicInteger threadNumber = new AtomicInteger(1);
    private final String namePrefix;

    DefaultThreadFactory() {
        SecurityManager s = System.getSecurityManager();
        group = (s != null) ? s.getThreadGroup() :
                              Thread.currentThread().getThreadGroup();
        namePrefix = "pool-" +
                      poolNumber.getAndIncrement() +
                     "-thread-";
    }

    // The API for creating threads.
    public Thread newThread(Runnable r) {
        // The thread's task is the Runnable object r.
        Thread t = new Thread(group, r,
                              namePrefix + threadNumber.getAndIncrement(),
                              0);
        // Make it a non-daemon thread.
        if (t.isDaemon())
            t.setDaemon(false);
        // Set its priority to Thread.NORM_PRIORITY.
        if (t.getPriority() != Thread.NORM_PRIORITY)
            t.setPriority(Thread.NORM_PRIORITY);
        return t;
    }
}

 

Explanation: ThreadFactory's job is simply to act as a factory that creates threads.

It exposes thread creation through newThread(). The task bound to each thread created by newThread() is a Runnable object, and every thread it creates is a non-daemon thread whose priority is Thread.NORM_PRIORITY.
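A hedged sketch of supplying a custom ThreadFactory (the class NamedThreadFactory and its name prefix are invented for illustration); it mirrors what DefaultThreadFactory does, but with a configurable thread-name prefix:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreadFactoryDemo {
    // Illustrative factory: same idea as DefaultThreadFactory, custom prefix.
    static class NamedThreadFactory implements ThreadFactory {
        private final AtomicInteger threadNumber = new AtomicInteger(1);
        private final String namePrefix;

        NamedThreadFactory(String namePrefix) {
            this.namePrefix = namePrefix;
        }

        public Thread newThread(Runnable r) {
            Thread t = new Thread(r, namePrefix + "-" + threadNumber.getAndIncrement());
            t.setDaemon(false);                   // non-daemon, as in DefaultThreadFactory
            t.setPriority(Thread.NORM_PRIORITY);  // normal priority, as in DefaultThreadFactory
            return t;
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2, new NamedThreadFactory("worker"));
        pool.execute(new Runnable() {
            public void run() {
                System.out.println(Thread.currentThread().getName() + " is running.");
            }
        });
        pool.shutdown();
    }
}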

 

2.2 RejectedExecutionHandler

handler is the ThreadPoolExecutor's handle to its rejection policy. A rejection policy is the action the pool takes when it refuses a task that is being submitted to it.

By default the pool uses defaultHandler, i.e. the AbortPolicy policy. Under AbortPolicy, the pool throws an exception when it rejects a task.
defaultHandler is defined as:

private static final RejectedExecutionHandler defaultHandler = new AbortPolicy();

The source of AbortPolicy is:

public static class AbortPolicy implements RejectedExecutionHandler {
    public AbortPolicy() { }

    // Throws an exception.
    public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
        throw new RejectedExecutionException("Task " + r.toString() +
                                             " rejected from " +
                                             e.toString());
    }
}

 

(II) Submitting tasks to the thread pool

1. execute()

execute() is defined in ThreadPoolExecutor.java; its source is:

public void execute(Runnable command) {
    // Throw an exception if the task is null.
    if (command == null)
        throw new NullPointerException();
    // Get the int value of ctl. It packs the worker-thread count and the pool's run state.
    int c = ctl.get();
    // If the number of worker threads is below corePoolSize, addWorker(command, true)
    // creates a new worker thread with the task (command) as its first task and starts it.
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    // Otherwise (worker count >= corePoolSize), if the pool is still RUNNING,
    // try to put the task into the blocking queue.
    if (isRunning(c) && workQueue.offer(command)) {
        // Recheck the pool state: if the pool is no longer running,
        // remove the task and apply the rejection policy via reject().
        int recheck = ctl.get();
        if (! isRunning(recheck) && remove(command))
            reject(command);
        // Otherwise, if there are no worker threads at all, start one via addWorker(null, false);
        // the new worker has no first task and will pull from the queue.
        else if (workerCountOf(recheck) == 0)
            addWorker(null, false);
    }
    // Otherwise try addWorker(command, false): create a new (non-core) worker thread with the
    // task as its first task and start it. If that fails, reject the task via reject().
    else if (!addWorker(command, false))
        reject(command);
}

Explanation: execute() submits a task to the pool for execution. It handles three cases:

        Case 1 -- The number of worker threads is below corePoolSize: a new worker thread is created and the task is handed to it for execution.
        Case 2 -- The number of worker threads is at least corePoolSize and the pool is in the RUNNING state: the task is placed in the blocking queue to wait. The pool state is then rechecked; if the state read the second time differs from the first (the pool is no longer running), the task is removed from the queue and rejected.
        Case 3 -- Neither of the above: the pool attempts to create a new (non-core) worker thread and hand it the task. If that fails, the task is rejected via reject(). A worked sketch of these three cases follows.
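The sketch below uses assumed parameters (core size 2, maximum size 4, a bounded queue of capacity 2, the default AbortPolicy); the class name and task counts are illustrative only, chosen so that every branch of execute() is exercised:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExecuteBranchesDemo {
    public static void main(String[] args) {
        // Assumed parameters for illustration: core=2, max=4, bounded queue of 2.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(2));

        for (int i = 1; i <= 7; i++) {
            final int id = i;
            try {
                // Tasks 1-2: case 1, new core worker threads.
                // Tasks 3-4: case 2, queued while the core threads are busy.
                // Tasks 5-6: case 3, extra non-core workers up to maximumPoolSize.
                // Task  7 : rejected (the default AbortPolicy throws).
                pool.execute(new Runnable() {
                    public void run() {
                        try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
                        System.out.println("task-" + id + " done on " + Thread.currentThread().getName());
                    }
                });
            } catch (RejectedExecutionException e) {
                System.out.println("task-" + id + " rejected");
            }
        }
        pool.shutdown();
    }
}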

 

2. addWorker()

The source of addWorker() is:

private boolean addWorker(Runnable firstTask, boolean core) {
    retry:
    // Update the "run state + worker count" marker, i.e. ctl.
    for (;;) {
        // Get the int value of ctl. It packs the worker-thread count and the pool's run state.
        int c = ctl.get();
        // Extract the pool's run state.
        int rs = runStateOf(c);

        // Validity check.
        if (rs >= SHUTDOWN &&
            ! (rs == SHUTDOWN &&
               firstTask == null &&
               ! workQueue.isEmpty()))
            return false;

        for (;;) {
            // Get the number of worker threads in the pool.
            int wc = workerCountOf(c);
            // If the worker count exceeds the bound, return false.
            if (wc >= CAPACITY ||
                wc >= (core ? corePoolSize : maximumPoolSize))
                return false;
            // Increment the worker count in c via CAS; on success, exit both loops.
            if (compareAndIncrementWorkerCount(c))
                break retry;
            c = ctl.get();  // Re-read ctl
            // If the pool state changed, restart from retry.
            if (runStateOf(c) != rs)
                continue retry;
            // else CAS failed due to workerCount change; retry inner loop
        }
    }

    boolean workerStarted = false;
    boolean workerAdded = false;
    Worker w = null;
    // Add the worker to the pool and start its thread.
    try {
        final ReentrantLock mainLock = this.mainLock;
        // Create a Worker whose first task is firstTask.
        w = new Worker(firstTask);
        // Get the Worker's thread.
        final Thread t = w.thread;
        if (t != null) {
            // Acquire the lock.
            mainLock.lock();
            try {
                int c = ctl.get();
                int rs = runStateOf(c);

                // Recheck the pool state.
                if (rs < SHUTDOWN ||
                    (rs == SHUTDOWN && firstTask == null)) {
                    if (t.isAlive()) // precheck that t is startable
                        throw new IllegalThreadStateException();
                    // Add the Worker (w) to the pool's worker set (workers).
                    workers.add(w);
                    // Update largestPoolSize.
                    int s = workers.size();
                    if (s > largestPoolSize)
                        largestPoolSize = s;
                    workerAdded = true;
                }
            } finally {
                // Release the lock.
                mainLock.unlock();
            }
            // If the worker was added successfully, start its thread.
            if (workerAdded) {
                t.start();
                workerStarted = true;
            }
        }
    } finally {
        if (! workerStarted)
            addWorkerFailed(w);
    }
    // Return whether the worker was started.
    return workerStarted;
}

Explanation

    addWorker(Runnable firstTask, boolean core) creates a new worker thread for the pool, with firstTask as the thread's first task, and starts it.
    If core is true the bound is corePoolSize: when the pool already has corePoolSize or more worker threads, addWorker() returns false. If core is false the bound is maximumPoolSize: when the pool already has maximumPoolSize or more worker threads, it returns false.
    addWorker() first loops, repeatedly trying to update ctl via CAS; ctl records the worker count and the pool's run state.
    Once the update succeeds, the try block adds the worker to the pool and starts its thread.

    addWorker() makes the mechanism clear: when a task is added this way, the pool creates a Worker object for it, and each Worker object owns one Thread object. (01) Adding the Worker to the pool's workers set is what "adds the task to the pool". (02) Starting the Worker's Thread is what actually executes the task. A simplified sketch of this Worker idea follows.
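The sketch below is a deliberately simplified illustration of the Worker/Thread relationship, not the JDK implementation (the real Worker also extends AbstractQueuedSynchronizer and cooperates with ctl); the class and variable names are made up:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MiniWorkerDemo {
    // Simplified illustration only: one worker owns one Thread and
    // repeatedly takes Runnables from a shared queue and runs them.
    static class MiniWorker implements Runnable {
        final Thread thread;
        final BlockingQueue<Runnable> queue;

        MiniWorker(BlockingQueue<Runnable> queue) {
            this.queue = queue;
            this.thread = new Thread(this);   // one Worker wraps one Thread
        }

        public void run() {
            try {
                while (true) {
                    Runnable task = queue.take();  // block until a task arrives
                    task.run();                    // run it in this worker's thread
                }
            } catch (InterruptedException e) {
                // interrupted: leave the loop and let the worker die
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>();
        MiniWorker worker = new MiniWorker(queue);
        worker.thread.start();
        queue.put(new Runnable() {
            public void run() { System.out.println("hello from " + Thread.currentThread().getName()); }
        });
        Thread.sleep(100);
        worker.thread.interrupt();  // stop the worker
    }
}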

 

3. submit()

One more note: submit() is itself implemented on top of execute(); its source is:

public Future<?> submit(Runnable task) {
    if (task == null) throw new NullPointerException();
    RunnableFuture<Void> ftask = newTaskFor(task, null);
    execute(ftask);
    return ftask;
}
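Since submit() wraps the task in a RunnableFuture and hands it to execute(), the caller can wait for the result. A short usage sketch (class name and values are illustrative):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // submit() returns a Future; internally the Callable is wrapped in a
        // RunnableFuture (a FutureTask) and passed to execute().
        Future<Integer> future = pool.submit(new Callable<Integer>() {
            public Integer call() {
                return 1 + 2;
            }
        });
        System.out.println("result = " + future.get());  // blocks until the task completes
        pool.shutdown();
    }
}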

 

(III) Shutting down the thread pool

The source of shutdown() is:

public void shutdown() {
    final ReentrantLock mainLock = this.mainLock;
    // Acquire the lock.
    mainLock.lock();
    try {
        // Check that the calling thread has permission to shut down the pool.
        checkShutdownAccess();
        // Advance the pool's run state to SHUTDOWN.
        advanceRunState(SHUTDOWN);
        // Interrupt the pool's idle worker threads.
        interruptIdleWorkers();
        // Hook method; a no-op in ThreadPoolExecutor.
        onShutdown(); // hook for ScheduledThreadPoolExecutor
    } finally {
        // Release the lock.
        mainLock.unlock();
    }
    // Try to terminate the pool.
    tryTerminate();
}

Explanation: shutdown() initiates an orderly shutdown of the pool: no new tasks are accepted, but tasks already submitted are still executed.
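A common usage pattern, shown here as a hedged sketch (timeout value and class name are illustrative), combines shutdown() with awaitTermination() and falls back to shutdownNow() if the pool does not terminate in time:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.execute(new Runnable() {
            public void run() { System.out.println("working..."); }
        });

        pool.shutdown();                    // stop accepting new tasks, finish queued ones
        if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
            pool.shutdownNow();             // force-interrupt stragglers after the timeout
        }
    }
}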

 

A thread has 5 states: new, runnable, running, blocked, and dead. A thread pool also has 5 states, but they are different from a thread's: RUNNING, SHUTDOWN, STOP, TIDYING, and TERMINATED.

The pool states are defined as follows:

private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
private static final int COUNT_BITS = Integer.SIZE - 3;
private static final int CAPACITY   = (1 << COUNT_BITS) - 1;

private static final int RUNNING    = -1 << COUNT_BITS;
private static final int SHUTDOWN   =  0 << COUNT_BITS;
private static final int STOP       =  1 << COUNT_BITS;
private static final int TIDYING    =  2 << COUNT_BITS;
private static final int TERMINATED =  3 << COUNT_BITS;

private static int ctlOf(int rs, int wc) { return rs | wc; }

Explanation

ctl is an AtomicInteger. It packs two pieces of information: the pool's run state and the number of worker threads in the pool.
ctl has 32 bits in total: the high 3 bits hold the run state and the low 29 bits hold the worker-thread count. The sketch after the list below verifies this packing.

RUNNING    -- high 3 bits 111.
SHUTDOWN   -- high 3 bits 000.
STOP       -- high 3 bits 001.
TIDYING    -- high 3 bits 010.
TERMINATED -- high 3 bits 011.
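The following small stand-alone sketch copies the constants from the snippet above and re-implements ctlOf()/runStateOf()/workerCountOf() purely for illustration, to show how state and worker count are packed into one int:

public class CtlBitsDemo {
    // Constants copied from the ThreadPoolExecutor snippet above.
    private static final int COUNT_BITS = Integer.SIZE - 3;        // 29
    private static final int CAPACITY   = (1 << COUNT_BITS) - 1;   // mask for the low 29 bits
    private static final int RUNNING    = -1 << COUNT_BITS;        // high 3 bits 111

    private static int ctlOf(int rs, int wc)   { return rs | wc; }
    private static int runStateOf(int c)       { return c & ~CAPACITY; }
    private static int workerCountOf(int c)    { return c & CAPACITY; }

    public static void main(String[] args) {
        int c = ctlOf(RUNNING, 5);  // RUNNING state, 5 worker threads
        System.out.println("running?     " + (runStateOf(c) == RUNNING));  // true
        System.out.println("workerCount: " + workerCountOf(c));            // 5
        // The top 3 bits carry the state; for RUNNING they are 111.
        System.out.println("high 3 bits: " + Integer.toBinaryString(c).substring(0, 3));
    }
}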

 

The transitions between these pool states are described below:

1. RUNNING

(01) Description: in the RUNNING state the pool accepts new tasks and processes tasks that have already been submitted.

(02) Transitions: the pool's initial state is RUNNING; in other words, as soon as a pool is created it is in the RUNNING state.
The reason is simple: the initializer of ctl (below) sets the state to RUNNING and the worker count to 0.

private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));

 

2. SHUTDOWN

(01) Description: in the SHUTDOWN state the pool does not accept new tasks, but it still processes tasks that were already submitted.

(02) Transitions: calling shutdown() moves the pool from RUNNING -> SHUTDOWN.

 

3. STOP

(01) Description: in the STOP state the pool does not accept new tasks, does not process queued tasks, and interrupts tasks that are currently running.

(02) Transitions: calling shutdownNow() moves the pool from (RUNNING or SHUTDOWN) -> STOP.

 

4. TIDYING

(01) Description: when all tasks have terminated and the worker count recorded in ctl is 0, the pool enters the TIDYING state. On entering TIDYING it runs the hook method terminated(). terminated() is empty in ThreadPoolExecutor; users who want to react when the pool reaches this point can override terminated() in a subclass, as sketched below.
(02) Transitions: SHUTDOWN -> TIDYING when the blocking queue is empty and the pool has no running tasks.
STOP -> TIDYING when the pool has no running tasks.
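A minimal sketch of overriding the terminated() hook (the class name and printed text are illustrative); the override runs once as the pool moves from TIDYING to TERMINATED:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class TerminatedHookDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>()) {
            @Override
            protected void terminated() {
                super.terminated();
                // Runs once as the pool transitions from TIDYING to TERMINATED.
                System.out.println("pool terminated");
            }
        };
        pool.execute(new Runnable() {
            public void run() { System.out.println("task done"); }
        });
        pool.shutdown();
    }
}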

 

5. TERMINATED

(01) Description: TERMINATED means the pool has fully terminated.
(02) Transitions: once terminated() finishes in the TIDYING state, the pool moves TIDYING -> TERMINATED.

 

Rejection policies

A rejection policy is the action the pool takes when a task submitted to it is rejected.

A task may be rejected for two reasons: first, the pool has been shut down; second, the pool is saturated, i.e. the work queue is full and the number of threads has reached maximumPoolSize.

The pool ships with 4 rejection policies: AbortPolicy, CallerRunsPolicy, DiscardOldestPolicy, and DiscardPolicy.

AbortPolicy         -- when a task is rejected, a RejectedExecutionException is thrown.
CallerRunsPolicy    -- when a task is rejected, it is run directly in the thread that called execute(), unless the pool has been shut down, in which case the task is discarded.
DiscardOldestPolicy -- when a task is rejected, the oldest unhandled task at the head of the work queue is dropped and the rejected task is resubmitted via execute(), unless the pool has been shut down, in which case it is discarded.
DiscardPolicy       -- when a task is rejected, it is silently discarded.

The pool's default policy is AbortPolicy.

 

Rejection policy comparison and examples

The examples below demonstrate each of the 4 rejection policies.

1. DiscardPolicy example

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DiscardPolicyDemo {

    private static final int THREADS_SIZE = 1;
    private static final int CAPACITY = 1;

    public static void main(String[] args) throws Exception {

        // Create the pool: both the maximum pool size and the core pool size are 1 (THREADS_SIZE),
        // and the pool's blocking queue has capacity 1 (CAPACITY).
        ThreadPoolExecutor pool = new ThreadPoolExecutor(THREADS_SIZE, THREADS_SIZE, 0, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(CAPACITY));
        // Set the rejection policy to "discard".
        pool.setRejectedExecutionHandler(new ThreadPoolExecutor.DiscardPolicy());

        // Create 10 tasks and submit them to the pool.
        for (int i = 0; i < 10; i++) {
            Runnable myrun = new MyRunnable("task-" + i);
            pool.execute(myrun);
        }
        // Shut down the pool.
        pool.shutdown();
    }
}

class MyRunnable implements Runnable {
    private String name;
    public MyRunnable(String name) {
        this.name = name;
    }
    @Override
    public void run() {
        try {
            System.out.println(this.name + " is running.");
            Thread.sleep(100);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Output

task-0 is running.
task-1 is running.

Explanation: both the "maximum pool size" and the "core pool size" of pool are 1 (THREADS_SIZE), which means the pool can run at most 1 task at a time.

The pool's work queue is an ArrayBlockingQueue, a bounded blocking queue, and its capacity here is 1. That means at most one task can be waiting in the queue.
From the analysis of the execute() code earlier in this article, the pool therefore runs only 2 tasks in total: the 1st task is handed directly to a Worker and executed by its thread; the 2nd task is placed in the work queue to wait. All the other tasks are discarded!

 

2. DiscardOldestPolicy example

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DiscardOldestPolicyDemo {

    private static final int THREADS_SIZE = 1;
    private static final int CAPACITY = 1;

    public static void main(String[] args) throws Exception {

        // Create the pool: both the "maximum pool size" and the "core pool size" are 1 (THREADS_SIZE),
        // and the capacity of the pool's work queue is 1 (CAPACITY).
        ThreadPoolExecutor pool = new ThreadPoolExecutor(THREADS_SIZE, THREADS_SIZE, 0, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(CAPACITY));
        // Set the pool's rejection policy to "DiscardOldestPolicy"
        pool.setRejectedExecutionHandler(new ThreadPoolExecutor.DiscardOldestPolicy());

        // Create 10 tasks and submit them to the pool.
        for (int i = 0; i < 10; i++) {
            Runnable myrun = new MyRunnable("task-" + i);
            pool.execute(myrun);
        }
        // Shut down the pool
        pool.shutdown();
    }
}

class MyRunnable implements Runnable {
    private String name;
    public MyRunnable(String name) {
        this.name = name;
    }
    @Override
    public void run() {
        try {
            System.out.println(this.name + " is running.");
            Thread.sleep(200);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Output

task-0 is running.
task-9 is running.

Explanation: after changing the rejection policy from DiscardPolicy to DiscardOldestPolicy, whenever a task is rejected the pool discards the oldest unhandled task at the head of the work queue and then retries submitting the rejected task, which ends up in the queue. That is why only task-0 (run directly by the worker) and task-9 (the last task left in the queue) actually run.

 

3. AbortPolicy example

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class AbortPolicyDemo {

    private static final int THREADS_SIZE = 1;
    private static final int CAPACITY = 1;

    public static void main(String[] args) throws Exception {

        // Create the pool: both the "maximum pool size" and the "core pool size" are 1 (THREADS_SIZE),
        // and the capacity of the pool's work queue is 1 (CAPACITY).
        ThreadPoolExecutor pool = new ThreadPoolExecutor(THREADS_SIZE, THREADS_SIZE, 0, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(CAPACITY));
        // Set the pool's rejection policy to "throw an exception"
        pool.setRejectedExecutionHandler(new ThreadPoolExecutor.AbortPolicy());

        try {
            // Create 10 tasks and submit them to the pool.
            for (int i = 0; i < 10; i++) {
                Runnable myrun = new MyRunnable("task-" + i);
                pool.execute(myrun);
            }
        } catch (RejectedExecutionException e) {
            e.printStackTrace();
            // Shut down the pool
            pool.shutdown();
        }
    }
}

class MyRunnable implements Runnable {
    private String name;
    public MyRunnable(String name) {
        this.name = name;
    }
    @Override
    public void run() {
        try {
            System.out.println(this.name + " is running.");
            Thread.sleep(200);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Output (from one run)

java.util.concurrent.RejectedExecutionException
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1774)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:768)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:656)
    at AbortPolicyDemo.main(AbortPolicyDemo.java:27)
task-0 is running.
task-1 is running.

Explanation: after changing the rejection policy from DiscardPolicy to AbortPolicy, a RejectedExecutionException is thrown as soon as a task submitted to the pool is rejected.

 

4. CallerRunsPolicy example

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CallerRunsPolicyDemo {

    private static final int THREADS_SIZE = 1;
    private static final int CAPACITY = 1;

    public static void main(String[] args) throws Exception {

        // Create the pool: both the "maximum pool size" and the "core pool size" are 1 (THREADS_SIZE),
        // and the capacity of the pool's work queue is 1 (CAPACITY).
        ThreadPoolExecutor pool = new ThreadPoolExecutor(THREADS_SIZE, THREADS_SIZE, 0, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(CAPACITY));
        // Set the pool's rejection policy to "CallerRunsPolicy"
        pool.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());

        // Create 10 tasks and submit them to the pool.
        for (int i = 0; i < 10; i++) {
            Runnable myrun = new MyRunnable("task-" + i);
            pool.execute(myrun);
        }

        // Shut down the pool
        pool.shutdown();
    }
}

class MyRunnable implements Runnable {
    private String name;
    public MyRunnable(String name) {
        this.name = name;
    }
    @Override
    public void run() {
        try {
            System.out.println(this.name + " is running.");
            Thread.sleep(100);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Output (from one run)

task-2 is running.
task-3 is running.
task-4 is running.
task-5 is running.
task-6 is running.
task-7 is running.
task-8 is running.
task-9 is running.
task-0 is running.
task-1 is running.

Explanation: after changing the rejection policy from DiscardPolicy to CallerRunsPolicy, a rejected task is not discarded; instead it is run directly in the thread that called execute(), here the main thread. That is why the rejected tasks (task-2 through task-9) are printed by the main thread before the pool's worker thread finishes task-0 and task-1.

 

Reposted from: https://www.cnblogs.com/kexianting/p/8550098.html
