Windows users, which Git UI do you use?
I only ever used SourceTree, but it's a piece of shit. I'm so sick of it.
A coworker recommended me Git Kraken, and Tower also looks pretty neat, so I'm gonna try those two first.
I use a combination of the IntelliJ git integration and some commandline tools. Works well enough for me.
In Windows I use SourceTree, but generally I prefer using the command line.
GitHub for Windows if you need a GUI, but Git is probably one of the few things that's actually represented fairly well by a command line due to its state-machine type usage.
I do still use the CLI rather frequently, but for things like browsing the history, staging changes, or reviewing staged/unstaged changes, I really prefer a GUI.
Some of those functions are built into my IDE. I use version history and blame/annotation frequently.
I am a Mac user but I use git via Terminal.
Hell yeah. I'm so glad we can use Java 8 in the project I'm doing at my summer internship. Almost abusing lambdas and stream functions at this point, while of course keeping it readable. The oldies taking over this code will have to learn.
Side note: Lambdas are so awesome.
I am just getting back into Android programming after a while and I know that separate Android applications are allowed to communicate with each other in some capacity, but is it possible for one to build a proprietary application that can modify the features of an already existing application?
I don't mean applications like those 3rd-party Instagram applications (which were most likely built using their API). I mean, is it possible to create an application that would, for example, run in the background and add features to already existing applications?
For example, an extension application that runs in the background when you use the Twitter application and could potentially add features, or disable existing ones?
(Sorry if this is not directly relevant I did not know where else to post this question)
Going to repost my question about Android applications from Stack Overflow if anyone has an answer
Potentially, through some weird service calls and intents. But the original app would have to be written to do that, which makes it a bit pointless.
Alternatively there's the option of having an overlay which catches touches, but that's dodgy as fuck.
It's not really how Android apps are meant to work. The main way to do it would be like what Facebook have done with messenger - have a button in the main app which calls a custom URL to the second, sending you to the store if you don't have the second app installed.
To quote a poster from many months ago: concurrency is hard. I cannot remember which of my synchronization points are used for what.
That is all.
I'm looking at promises, and I do not see their purpose at all. Like, asynchronous callbacks are much easier to use and understand. What the hell are "thens" and "catches"? Are JS promises trying to do a poor man's version of try-catch?
api(function(result){
api2(function(result2){
api3(function(result3){
// do work
});
});
});
api().then(function(result){
    return api2();
}).then(function(result2){
    return api3();
}).then(function(result3){
    // do work
}).catch(function(err){
    // a rejection anywhere in the chain lands here, like a catch block
});
const result = await api();
const result2 = await api2();
const result3 = await api3();
const results = await Promise.all([api(), api2(), api3()]);
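For the "catches" part of the question: .catch is to a promise chain what a catch block is to try/catch, and with async-await it literally becomes try/catch. A minimal sketch with stub api/api2 functions (stand-ins, not a real API):

```javascript
// Stub APIs returning promises (placeholders for the real calls).
const api = () => Promise.resolve(1);
const api2 = () => Promise.reject(new Error("api2 failed"));

// Promise style: .catch handles a rejection from anywhere in the chain.
api()
  .then(result => api2())
  .then(result2 => console.log("done", result2))
  .catch(err => console.log("caught:", err.message)); // caught: api2 failed

// async-await style: the same rejection surfaces as a normal exception.
async function run() {
  try {
    const result = await api();
    const result2 = await api2();
    console.log("done", result2);
  } catch (err) {
    console.log("caught:", err.message); // caught: api2 failed
  }
}
run();
```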
I'm looking at promises, and I do not see their purpose at all. Like, asynchronous callbacks are much easier to use and understand. What the hell are "thens" and "catches"? Is JS promise trying to do a poor man's version of try-catch?
parseInt(x)
.andThen(x => x * 2)
.andThen(x => 1/x)
.andThen(x => x.toString())
.orElse(e => `invalid input: ${e}`)
dictionary.get(x)
.andThen(x => x.toString())
.orElse(e => `invalid input: ${e}`)
list.filter(x => x > 10)
.map(x => x * 2)
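The andThen/orElse above are pseudocode, not built-in JS. As a sketch of how that chaining style works under the hood, here's a tiny hypothetical Result type (note andThen here maps a plain function, closer to map than a true flatMap):

```javascript
// Minimal Result type: Ok wraps a value; Err short-circuits every
// andThen and is finally handled by orElse.
class Result {
  constructor(ok, value) { this.ok = ok; this.value = value; }
  static Ok(v)  { return new Result(true, v); }
  static Err(e) { return new Result(false, e); }
  andThen(f) { return this.ok ? Result.Ok(f(this.value)) : this; }
  orElse(f)  { return this.ok ? this.value : f(this.value); }
}

// A parseInt that reports failure as a value instead of NaN.
const tryParseInt = s => {
  const n = parseInt(s, 10);
  return Number.isNaN(n) ? Result.Err("not a number") : Result.Ok(n);
};

const out = tryParseInt("4")
  .andThen(x => x * 2)
  .andThen(x => 1 / x)
  .andThen(x => x.toString())
  .orElse(e => `invalid input: ${e}`);
// out === "0.125"

const bad = tryParseInt("nope")
  .andThen(x => x * 2)
  .orElse(e => `invalid input: ${e}`);
// bad === "invalid input: not a number"
```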
I've lost my taste for method chaining like this. They increase token count unnecessarily. And indentation. And they're worst of all for replacing null. Promises are good, but I'd prefer async-await. Same thing with erroring and nullability. I prefer first-class language support for common control flow semantics.
Anybody here planning on going to CppCon this year, or am I the only idiot crazy enough to be this into C++?
Turning your application inside out like a Klein bottle is way harder for me than using functions from one type to another. I wish my problems were as easy as promises. I'm currently trying to put a memory barrier in so the GPU doesn't touch some device memory while I update it. And symmetrically, I have to make sure my memory allocator isn't touching that memory while I render using it.
Please let us know how it goes :S I'm still skeptical of metaclasses!
I've lost my taste for method chaining like this. They increase token count unnecessarily. And indentation. And they're worst of all for replacing null.
Stuff like this makes me really skeptical of claims from other people about immutability and functional programming being better for concurrency. All the real concurrency I've ever experienced isn't solved by the new fancy stuff at all.
Yea, there is a whole session on metaclasses, so I'm looking forward to it.
You can guarantee that there will never be a null pointer exception. Why would you ever want to defer that to runtime where it's harder to catch and more costly if it gets through? And if your answer is "well I'm good at checking" then great, let the language help you.
You have two things that can mutate a shared state; functional programming doesn't typically allow that. It's probably hard because you are throwing away concurrency guarantees to reduce a memory footprint.
I didn't say defer that to the runtime, I said integrate it into the language semantics in a more privileged way. For instance, Swift semantically has option types but in practice the compiler checks execution paths for null checks. Also, Rust introduced its ? operator which unwraps a Some/Ok but early returns a None/Error. Eventually (eventually >.>) they'll stabilize their own try-catch which functions the same as a match decomposition but saves you the ugly method chaining.
Although none of that satisfies my biggest gripe with using error values over exceptions, namely the union of error types from other parts of a large application. OCaml figured this out a long time ago with "polymorphic variants", aka open unions, but, well, it isn't efficient, it doesn't namespace, etc. Hopefully koren can correct me here if I'm wrong. Also I think Elm has open unions, I can't remember.
And thank god for shared state, otherwise things would never be fast. How do I do work cancellation to cancel fork join tasks (like tree searches) which have become unnecessary... without CAS'ing a boolean? Why should I use "persistent" data structures like hash tries when I lose superlinear speedups due to cache locality? Even something basic like having n worker threads depends on using a mutable (and lock-free, of course) work queue. In my task especially I have no choice; I'm updating GPU memory. My PCIe bus (and the memory allocator for that matter) just isn't fast enough to allocate a gigantic chunk of memory every ~32ms frame. And even if it were, I'd be wasting all that time on something unnecessary and losing out on absolute buckets of performance that could have gone to shading.
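For what it's worth, the CAS'd cancellation boolean mentioned above can be sketched even in JS with Atomics on a SharedArrayBuffer (workers elided; this just shows the operation, not a full fork-join pool):

```javascript
// A cancellation flag that worker threads would poll; 0 = running, 1 = cancelled.
const flag = new Int32Array(new SharedArrayBuffer(4));

// First canceller wins: compareExchange returns the OLD value, so only the
// caller that observed 0 actually performed the 0 -> 1 transition.
function cancel() {
  return Atomics.compareExchange(flag, 0, 0, 1) === 0;
}
const isCancelled = () => Atomics.load(flag, 0) === 1;

console.log(cancel());      // true  (we flipped 0 -> 1)
console.log(cancel());      // false (already cancelled, CAS lost)
console.log(isCancelled()); // true
```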
Obviously you minimize this surface area. But you don't get rid of it completely. And for the record, Haskell has MVars, Clojure has atoms, etc. You just try to make the synchronization points as small as possible (because contention is bad for performance, too). Keeping track of unique ownership is a good idea, too. I believe in Go this is enshrined as the adage "share memory by communicating, don't communicate by sharing memory".
Idiomatic Rust prefers method chains over control structures precisely because the latter is considered "uglier" in that community. But what's cool about Option&lt;T&gt; over null is you get a bunch of benefits, like forcing it to be dealt with and the caller knowing what to expect, but it's a normal enum with a bit of sugar, no special privilege necessary.
That's all about standard types; you can use strings if you really want. Exceptions are objects in most languages. The difference is that with a tagged union type, the error only comes out of the function in one way that is explicitly represented by the type signature, with no upward propagation, so you don't wind up with large catch-alls or not catching at all.
Exactly, functional programming keeps the sync to a very small surface area where it's most obvious and explicit. And like anything there is a tradeoff: lose formal verification to eke out some more speed on a domain-specific problem. But if you can spare it, correctness is usually preferred to something blazing fast that crashes.
x=0;\n\twhile (x < 10) { \n \tx++; \n }
Does anyone know where there's 9 whitespace tokens in the code above?
I'm counting 11 whitespace tokens. They are, in order from left to right:
\n, \t, blank space, blank space, blank space, blank space, \n, \t, blank space, \n, blank space
I'm probably misunderstanding what you're saying, but I can't see why functional would be slower than other paradigms... (Especially since OCaml is often the fastest language in programming speed contests)
I agree with forced handling, I'm just saying that it's really onerous when you do it through methods and closures. And you're wrong about it being considered "uglier" - a huge amount of people in Rust clamored for the ? early return syntax and the try! macro had been in the std since the beginning. I think the RFC that added ? was the most commented-on RFC in rust history.
try!(try!(try!(foo()).bar()).baz())
// becomes:
foo()?.bar()?.baz()?
The bolded is pretty rarely the case for me. I've experienced writing subsystems that all have their own set of possible errors but which eventually bubble up to the orchestration of the main program and need to be logged or handled there. I either have to make huge unions that represent every error in the entire application or (more commonly) make separate, redundant definitions for all the subsets I plan on using and implement the Into trait (which is why try! has always expanded into "return error.into()" in the erroring case, btw). My point is that I'd much prefer starting with a large set and then writing subsets of cases as type synonyms.
This is tautological. If speed doesn't matter, speed doesn't matter? I'm not saying everything has to be as fast as humanly possible, but my brain twitches when people say that the slowdown of Moore's Law + need for concurrency means that people should move to a paradigm of immutability. Speed is the sine qua non of parallelism. If it's not going to be fast, then why are you doing it? And don't underestimate the fantastic slowdown of functional datastructures. My experience is they can be an order to two orders of magnitude slower than the equivalent mutable, data-contiguous, ephemeral solution.
Also boo at "formal verification". I've never seen a functional program actually formally verified beyond what the type system already affords (which is not much except memory and exception safety). The safest programs in the world that are actually, really formally verified are being written in mutable C and Ada under DOD software engineering standards. These are your rockets, your airplanes, your infrastructure, you name it.
Off the top of your head, can you think of things you might have used it for? I think a simple "data" metaclass is probably pretty useful. I've wanted something like that before. Common Lisp has metaclasses and I saw some really wild uses for it.
interface foo {
int x();
int y();
};
struct foo {
    virtual ~foo() = default; // a pure virtual dtor (= 0) would still need a definition to link
    virtual int x() = 0;
    virtual int y() = 0;
};
If you're counting those, I have 2 more:
\n, \t, blank space, blank space, blank space, blank space, blank space, \n, blank space, \t, blank space, \n, blank space
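A quick sanity check of that count, reading the \n and \t in the snippet as actual newline/tab characters:

```javascript
// The snippet from the puzzle, with \n and \t as real characters.
const code = "x=0;\n\twhile (x < 10) { \n \tx++; \n }";

// Every whitespace character, in order of appearance.
const ws = code.match(/\s/g);
console.log(ws.length); // 13
```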
?
Which tense do you all use for git commit messages? Personally I prefer past tense, but I've seen wildly varying opinions and arguments for different tenses.
imperative
Not the paradigm, but the datastructures commonly associated with it. Immutable hash maps, for instance. There's a huge performance cost.
? itself is used in method chaining:
foo()?.bar()?.baz()?
By method chaining I meant using .and_then. I much prefer ? to the former with closures. That's all I mean.
Fair criticism. What I'm saying is that you need something that works and then make it fast; solve concurrency first. Sometimes you don't even need concurrency at all, a good single-threaded app is just fine if you're getting good performance already. A fast program that crashes periodically isn't worth much. Users eventually have to use it. I also think you're exaggerating the "slowness" of the underlying structures; it's probably more that you are very familiar with your domain and use some more specific implementations.
Some things will look uglier when ported to languages that don't have first class support for certain features. But I'd say null propagation, exception propagation and concurrency are 3 of the biggest things that functional programming really helps with.
This is true. I'm coming at this from a different end, mind you: I listened to a lot of Rich Hickey years ago, drank the kool-aid so to speak, and learned Haskell and all that, and while I value my time there, there was a sense of feeling like I had been lied to when I did more advanced concurrency in real applications and realized all the tricks weren't going to cut it. Suddenly I was back to utilizing mutable state, globals, locks (!), etc.
Can someone suggest a good Data Structure and Algorithm online course?
I am doing sample problems from Codefights and Codingbat as suggested before, but most of the time I see myself copying and modifying others' code rather than coming up with my own functions from scratch.
I need to clear my basic fundamentals, so looking for a proper course that I can take.
Get one of the classic books and go through it doing the exercises. Doing the exercises is the important part.
I personally prefer Cormen.
Has anyone here done automation with Outlook before? I want to use VBA to automate some stuff. I need a decent guide.
Also, any good guides on making a GUI with Java? I really regret making this project in Java instead of C#.
So I'm looking into learning Python.
I have done a LOT of scripting with AutoIT and a good amount of Powershell so I'm not a total newbie but I have never really worked with Python.
Any good online resources you guys could suggest that would help me?
If you know how to WPF in C# then JavaFX is very similar. Any beginner guide should let you catch up on the differences between the two and it'll be smooth from there. All of the good practices, like keeping design out of the code, are the same, too.