Koen Bok | 1 Oct 22:30 2014

Showing the Web Inspector from Obj-C [WKWebView/OSX]

I've been looking into how to open the Web Inspector for a WKWebView from code, so I can display it after a specific button is clicked in my app.

Since this commit(1), I can launch the inspector from the right-click developer menu, so I'm sure it is possible; I just can't figure out how.

My app is distributed outside the App Store, so I'm fine with using private API.

Thanks, Koen

Alexey Proskuryakov | 30 Sep 00:05 2014

New EWS status bubbles in Bugzilla

Hi,

WebKit Bugzilla has new EWS status bubbles now, which will hopefully make it clearer what's going on with a patch. The mysterious yellow bubbles that could mean anything have been eliminated and, most importantly, detailed information is now presented on hover:



Please try it out, and let me know if something breaks, or is not as good as it could be!

- Alexey

Rik Cabanier | 29 Sep 22:41 2014

Adding support for gradient midpoint

All,

I'm planning on adding support for gradient midpoints.[1]
Since this is such a small addition, the feature will not be behind a feature flag and will be enabled by default.

Let me know if you have questions or concerns about this approach.

Dan Gohman | 28 Sep 15:44 2014

Re: SIMD support in JavaScript

Hi Nadav,

I agree with much of your assessment of the proposed SIMD.js API.
However, I don't believe its unsuitability for some problems
invalidates it for solving other very important problems for which it
is well suited. Performance portability is actually one of SIMD.js'
biggest strengths: not the kind of performance portability that aims
for a consistent percentage of peak on every machine (which, as you
note, an explicit 128-bit SIMD API of course won't achieve), but the
kind that delivers predictable performance and minimizes surprises
across machines (yes, there are some unavoidable ones, but overall the
picture is quite good).

On 09/26/2014 03:16 PM, Nadav Rotem wrote:
> So far, I’ve explained why I believe SIMD.js will not be
> performance-portable and why it will not utilize modern instruction
> sets, but I have not made a suggestion on how to use vector
> instructions to accelerate JavaScript programs. Vectorization, like
> instruction scheduling and register allocation, is a code-generation
> problem. In order to solve these problems, it is necessary for the
> compiler to have intimate knowledge of the architecture. Forcing the
> compiler to use a specific instruction or a specific data-type is the
> wrong answer. We can learn a lesson from the design of compilers for
> data-parallel languages. GPU programs (shaders and compute languages,
> such as OpenCL and GLSL) are written using vector instructions because
> the domain of the problem requires vectors (colors and coordinates).
> One of the first things that data-parallel compilers do is to break
> vector instructions into scalars (this process is called
> scalarization). After getting rid of the vectors that resulted from
> the problem domain, the compiler may begin to analyze the program,
> calculate profitability, and make use of the available instruction set.

> I believe that it is the responsibility of JIT compilers to use vector
> instructions. In the implementation of WebKit's FTL JIT compiler,
> we took one step in the direction of using vector instructions. LLVM
> already vectorizes some code sequences during instruction selection,
> and we started investigating the use of LLVM’s Loop and SLP
> vectorizers. We found that despite nice performance gains on a number
> of workloads, we experienced some performance regressions on Intel’s
> Sandybridge processors, which are currently very popular desktop
> processors. JavaScript code contains many branches (due to dynamic
> speculation). Unfortunately, branches on Sandybridge execute on Port5,
> which is also where many vector instructions are executed. So,
> pressure on Port5 prevented performance gains. The LLVM vectorizer
> currently does not model execution port pressure and we had to disable
> vectorization in FTL. In the future, we intend to enable more
> vectorization features in FTL.

This is an example of a weakness of depending on automatic vectorization
alone. High-level language features create complications which can lead
to surprising performance problems. Compiler transformations to target
specialized hardware features often have widely varying applicability.
Expensive analyses can sometimes enable more and better vectorization,
but when a compiler has to do an expensive complex analysis in order to
optimize, it's unlikely that a programmer can count on other compilers
doing the exact same analysis and optimizing in all the same cases. This
is a problem we already face in many areas of compilers, but it's more
pronounced with vectorization than many other optimizations.

In contrast, the proposed SIMD.js has the property that code using it
will not depend on expensive compiler analysis in the JIT, and is much
more likely to deliver predictable performance in practice between
different JIT implementations and across a very practical variety of
hardware architectures.

>
> To summarize, SIMD.js will not provide a portable performance solution
> because vector instruction sets are sparse and vary between
> architectures and generations. Emscripten should not generate vector
> instructions because it can’t model the target machine. SIMD.js will
> not make use of modern SIMD features such as predication or
> scatter/gather. Vectorization is a compiler code generation problem
> that should be solved by JIT compilers, and not by the language
> itself. JIT compilers should continue to evolve and to start
> vectorizing code like modern compilers.

As I mentioned above, performance portability is actually one of
SIMD.js's core strengths.

I have found it useful to think of the API proposed in SIMD.js as a
"short vector" API. It hits a sweet spot: it is a convenient size for
many XYZW and RGB/RGBA and similar algorithms, it is implementable on a
wide variety of very relevant hardware architectures, it is long enough
to deliver worthwhile speedups for many tasks, and it is short enough
to still be convenient to manipulate.
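
To make the "short vector" idea concrete, here is a minimal C++/SSE sketch (my own illustration, not part of the SIMD.js proposal): an entire RGBA color is treated as one 4-wide vector and processed with single instructions.

```cpp
#include <xmmintrin.h> // SSE intrinsics

// Average two RGBA colors (four floats each) with one 4-wide add and one
// 4-wide multiply: the whole color is a single "short vector".
__m128 averageColor(__m128 a, __m128 b)
{
    __m128 sum = _mm_add_ps(a, b);             // {r0+r1, g0+g1, b0+b1, a0+a1}
    return _mm_mul_ps(sum, _mm_set1_ps(0.5f)); // halve every lane at once
}
```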

I agree that the "short vector" model doesn't address all use cases, so
I also believe a "long vector" approach would be very desirable as well.
Such an approach could be based on automatic loop vectorization, a SPMD
programming model, or something else. I look forward to discussing ideas
for this. Such approaches have the potential to be much more scalable
and adaptable, and can be much better positioned to solve those problems
that the presently proposed SIMD.js API doesn't attempt to solve. I
believe there is room for both approaches to coexist, and to serve
distinct sets of needs.

In fact, a good example of short and long vector models coexisting is in
these popular GPU programming models that you mentioned, where short
vectors represent things in the problem domains like colors and
coordinates, and are then broken down by the compiler to participate in
the long vectors, as you described. It's very plausible that the
proposed SIMD.js could be adapted to combine with a future long-vector
approach in the same way.

Dan
Nadav Rotem | 27 Sep 00:16 2014

SIMD support in JavaScript

Recently, members of the JavaScript community at Intel and Mozilla have suggested adding SIMD types to the JavaScript language. In this email I would like to share my thoughts about this proposal and to start a technical discussion about SIMD.js support in WebKit. I have BCCed some of the authors of the proposal so they can participate in this discussion.

Modern processors feature SIMD (Single Instruction, Multiple Data) instructions, which perform the same arithmetic operation on a vector of elements. SIMD instructions are used to accelerate compute-intensive code, like image processing algorithms, because the same calculation is applied to every pixel in the image. A single SIMD instruction can process 4 or 8 pixels at the same time. Compilers try to make use of SIMD instructions in an optimization called vectorization.
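
As a rough illustration (a hand-written C++/SSE sketch of my own, not code from any JavaScript engine), here is the same brightness adjustment written once per pixel and once four pixels at a time:

```cpp
#include <xmmintrin.h> // SSE intrinsics
#include <cstddef>

// Scalar version: one pixel per loop iteration.
void brighten(float* pixels, std::size_t count, float delta)
{
    for (std::size_t i = 0; i < count; ++i)
        pixels[i] += delta;
}

// Vectorized version: four pixels per iteration with a single SSE add.
// Assumes count is a multiple of 4 and pixels is 16-byte aligned, to keep
// the sketch short; a real implementation would handle the remainder.
void brightenSIMD(float* pixels, std::size_t count, float delta)
{
    __m128 d = _mm_set1_ps(delta);
    for (std::size_t i = 0; i < count; i += 4) {
        __m128 v = _mm_load_ps(pixels + i);         // load 4 floats
        _mm_store_ps(pixels + i, _mm_add_ps(v, d)); // add and store 4 at once
    }
}
```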

The SIMD.js API adds new types, such as float32x4, and operators that map to vector instructions on most processors. The idea behind the proposal is that manual use of vector instructions, just like intrinsics in C, will allow developers to accelerate common compute-intensive JavaScript applications. The idea of using SIMD instructions to accelerate JavaScript code is compelling because high-performance applications in JavaScript are becoming very popular.

Before I became involved with JavaScript through my work on the FTL project, I developed the LLVM vectorizer and worked on a vectorizing compiler for a data-parallel programming language. Based on my experience with vectorization, I believe that the current proposal to include SIMD types in the JavaScript language is not the right approach to utilize SIMD instructions. In this email I argue that vector types should not be added to the JavaScript language.

Vector instruction sets are sparse, asymmetrical, and vary in size and features from one generation to another. For example, some Intel processors feature 512-bit wide vector instructions. This means that they can process 16 floating point numbers with one instruction. However, today’s high-end ARM processors feature 128-bit wide vector instructions and can only process 4 floating point elements. ARM processors support byte-sized blend instructions but only recent Intel processors added support for byte-sized blends. ARM processors support variable shifts but only Intel processors with AVX2 support variable shifts. Different generations of Intel processors support different instruction sets with different features such as broadcasting from a local register, 16-bit and 64-bit arithmetic, and varied shuffles. Modern processors even feature predicated arithmetic and scatter/gather instructions that are very difficult to model using target independent high-level intrinsics. 
The designers of a high-level, target-independent API have to decide whether to support the union of all vector instruction sets or their intersection. The intersection of all popular instruction sets is too small to be usable for writing non-trivial vector programs, and the union will cause huge performance regressions on platforms that do not support the instructions being used.
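
As an illustration of that trade-off, consider a byte-wise blend: on processors with SSE4.1 it is a single instruction, while older processors must emulate it with several operations. This is a minimal C++ intrinsics sketch of my own, not part of the proposal:

```cpp
#include <emmintrin.h> // SSE2
#include <smmintrin.h> // SSE4.1

// Byte-wise select: with SSE4.1 this is a single instruction.
__m128i blendBytesSSE41(__m128i a, __m128i b, __m128i mask)
{
    return _mm_blendv_epi8(a, b, mask); // takes b where the mask byte's high bit is set
}

// Without SSE4.1 the same semantics (for an all-ones/all-zeros byte mask)
// take three logical operations, and on hardware with no suitable vector
// support the fallback is per-byte scalar code.
__m128i blendBytesSSE2(__m128i a, __m128i b, __m128i mask)
{
    return _mm_or_si128(_mm_and_si128(mask, b), _mm_andnot_si128(mask, a));
}
```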

Code that uses SIMD.js is not performance-portable. Modern vectorizing compilers feature complex cost models and heuristics for deciding when to vectorize, at which vector width, and how many loop iterations to interleave. The cost model takes into account the features of the vector instruction set, properties of the architecture such as the number of vector registers, and properties of the current processor generation. A poor decision on any of the vectorization parameters can result in a major performance regression. Executing vector intrinsics on processors that don't support them is slower than executing multiple scalar instructions, because the compiler can't always generate efficient code with the same semantics.
I don't believe that it is possible to write non-trivial vector code that will show performance gains on processors from different families. Executing vector code with insufficient hardware support will cause major performance regressions. One of the motivations for SIMD.js was to allow Emscripten to vectorize C code and emit JavaScript SIMD intrinsics. One problem with this suggestion is that the Emscripten compiler should not assume that the target is an x86 machine, or that a specific vector width and interleave factor is the right answer. Targeting a specific processor will surely cause regressions on other processors.

SIMD.js does not make good use of modern vector instruction sets. Modern vector processors feature large vectors (up to 512-bit), predication of arithmetic and memory operations, scatter/gather memory operations, advanced shuffles and broadcasts, and other features that make vectorization efficient. The current SIMD.js proposal is limited to a small number of arithmetic operations on 128-bit vector data types.

So far, I've explained why I believe SIMD.js will not be performance-portable and why it will not utilize modern instruction sets, but I have not made a suggestion on how to use vector instructions to accelerate JavaScript programs. Vectorization, like instruction scheduling and register allocation, is a code-generation problem. In order to solve these problems, it is necessary for the compiler to have intimate knowledge of the architecture. Forcing the compiler to use a specific instruction or a specific data-type is the wrong answer. We can learn a lesson from the design of compilers for data-parallel languages. GPU programs (shaders and compute languages, such as OpenCL and GLSL) are written using vector instructions because the domain of the problem requires vectors (colors and coordinates). One of the first things that data-parallel compilers do is to break vector instructions into scalars (this process is called scalarization). After getting rid of the vectors that resulted from the problem domain, the compiler may begin to analyze the program, calculate profitability, and make use of the available instruction set.

I believe that it is the responsibility of JIT compilers to use vector instructions. In the implementation of WebKit's FTL JIT compiler, we took one step in the direction of using vector instructions. LLVM already vectorizes some code sequences during instruction selection, and we started investigating the use of LLVM's Loop and SLP vectorizers. We found that despite nice performance gains on a number of workloads, we experienced some performance regressions on Intel's Sandybridge processors, which are currently very popular desktop processors. JavaScript code contains many branches (due to dynamic speculation). Unfortunately, branches on Sandybridge execute on Port5, which is also where many vector instructions are executed. So, pressure on Port5 prevented performance gains. The LLVM vectorizer currently does not model execution port pressure and we had to disable vectorization in FTL. In the future, we intend to enable more vectorization features in FTL.

To summarize, SIMD.js will not provide a portable performance solution because vector instruction sets are sparse and vary between architectures and generations. Emscripten should not generate vector instructions because it can’t model the target machine. SIMD.js will not make use of modern SIMD features such as predication or scatter/gather. Vectorization is a compiler code generation problem that should be solved by JIT compilers, and not by the language itself. JIT compilers should continue to evolve and to start vectorizing code like modern compilers.

Thanks,
Nadav

Alexey Proskuryakov | 26 Sep 01:20 2014

Fooling with EWS and commit queue

Hi,

I started making changes to the logic of EWS and the commit queue. Please e-mail me if something breaks, or even
if something begins to behave more strangely than it did before.

- Alexey
Chris Dumez | 25 Sep 20:11 2014

Type checking / casting helpers

Hi all,

I started working on automatically generating the type casting helpers for HTML/SVG/MathML Elements (e.g. toHTMLDivElement()). Until now, we were generating only the type checking helpers using make_names.pl (e.g. isHTMLDivElement()); the type casting helpers had to be defined manually using the NODE_TYPE_CASTS() macro.

The type casting helpers are now automatically generated for most types. Part of the solution involved using a templated function for type casting because the types are forward-declared and we needed to do a static_cast<>() (a reinterpret_cast<>() could be used with forward declarations but wouldn’t be safe due to multiple inheritance).

I initially had macros in place so that toHTMLDivElement() would still work and would be equivalent to downcast<HTMLDivElement>(). The feedback I received is that we should get rid of these macros and just use is<HTMLDivElement>() / downcast<HTMLDivElement>() everywhere.
The new style is very close to C++’s is_class<T>() and Boost’s polymorphic_downcast<T>().
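
For readers unfamiliar with the new style, here is a minimal, self-contained sketch of the shape of these helpers, using hypothetical stand-in classes and a plain assert(); it is not WebKit's actual generated code:

```cpp
#include <cassert>
#include <type_traits>

// Stand-ins for the real class hierarchy (Node / HTMLDivElement).
struct Node {
    virtual ~Node() = default;
    virtual bool isHTMLDivElement() const { return false; }
};
struct HTMLDivElement : Node {
    bool isHTMLDivElement() const override { return true; }
};

// Type-checking helper; in WebKit these specializations are generated.
template<typename Target> bool is(const Node&);
template<> inline bool is<HTMLDivElement>(const Node& node) { return node.isHTMLDivElement(); }

// Type-casting helper: a function template, so the static_cast is only
// instantiated where the full class definitions are visible, and a cast
// to the source type itself fails at compile time ("Unnecessary type cast").
template<typename Target, typename Source>
Target& downcast(Source& source)
{
    static_assert(!std::is_same<Target, typename std::remove_cv<Source>::type>::value,
                  "Unnecessary type cast");
    assert(is<Target>(source));
    return static_cast<Target&>(source);
}

// Usage: if (is<HTMLDivElement>(node)) { auto& div = downcast<HTMLDivElement>(node); }
```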

I actually started updating the code to do this but I should have emailed webkit-dev about this beforehand. I apologize for sending this message a bit late.

Please let me know if you have feedback / concerns / questions about this change. I hope that this email gives you a better understanding of why I am making this change.

As I said before, the code base is not fully ported yet, so the current situation is not necessarily pretty. I will try to go through the transition as fast as I can, provided that people don't raise any concerns about this.

Please also note that these new helpers still catch unnecessary type checks / casts. As a matter of fact, those are now caught at build time instead of link time, and should give you a nice "Unnecessary type check" / "Unnecessary type cast" static assertion.

Also note that the plan is to get rid of the TYPE_CAST_BASE() macro entirely and extend is<>() / downcast<>() to all types, not just Nodes.

Kr,
--
Chris Dumez - Apple Inc.
Cupertino, CA




Osztrogonác Csaba | 25 Sep 12:53 2014

Windows EWS bots having issues

Hi,

Eight machines are now connected to the EWS server as Apple Windows EWS bots,
but it seems three of them are out of order and can't build anything.
When they pick up patches from the queue again and again, they only
slow down patch processing. Is there any chance that somebody
from Apple could fix them, or power them off, in the near future? Thanks.

http://webkit-queues.appspot.com/queue-status/win-ews/bots/APPLE-EWS-4
Last Pass: 4 months ago

http://webkit-queues.appspot.com/queue-status/win-ews/bots/APPLE-EWS-5
Last Pass: 1 month, 2 weeks ago

http://webkit-queues.appspot.com/queue-status/win-ews/bots/APPLE-EWS-7
Last Pass: 1 month, 2 weeks ago

br,
Ossy
Benjamin Poulain | 23 Sep 10:28 2014

Easy bugs for grab

Hello WebKittens,

From time to time, people ask for easy bugs to fix.

Looking at various specs, I see some new features and easy fixes available. For example:
- Unprefix cursor zoom-in and zoom-out (easy): http://www.w3.org/TR/css3-ui/#cursor
- Element.closest (easy): https://dom.spec.whatwg.org/#dom-element-closest
- Fix the visualization of :nth-child(An+B of selector) in the Inspector (probably easy).
- Unprefix your favorite CSS property (easy to hard, depending on the property).
- etc., etc.

If you want to work on an amazing open-source project, here is an 
opportunity to get started. Shoot me an email if you want to implement 
one of the ideas above.

If nobody wants those, I'll just fix them :)

Benjamin
Daryle Walker | 22 Sep 01:48 2014
Picon

WebView and User Interface Restore

Do WebView instances participate in the Resume feature (with +restoreWindowWithIdentifier:state:completionHandler:, etc.), or do I have to handle their state (the web view's back-forward list and which item is current) manually?

— 
Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT mac DOT com 

Lucas Forschler | 19 Sep 22:34 2014

Configuring and redirecting build.webkit.org to https

Hello Webkit,

We plan to configure build.webkit.org to use https, and will set up the server so that it always redirects to https.

Currently, this work is scheduled to happen on Tuesday, September 23rd.

Please email me if you have any questions or concerns with this transition.

Thanks,
Lucas
